Dec 10 14:31:34 crc systemd[1]: Starting Kubernetes Kubelet... Dec 10 14:31:34 crc restorecon[4695]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Dec 10 14:31:34 
crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Dec 10 14:31:34 crc restorecon[4695]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 
14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4695]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc 
restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 10 14:31:34 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:35 
crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:35 crc restorecon[4695]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 
14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:35 crc 
restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 
14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 
14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:35 crc restorecon[4695]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:35 crc restorecon[4695]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 10 14:31:36 crc kubenswrapper[4727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 10 14:31:36 crc kubenswrapper[4727]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 10 14:31:36 crc kubenswrapper[4727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 10 14:31:36 crc kubenswrapper[4727]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 10 14:31:36 crc kubenswrapper[4727]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 10 14:31:36 crc kubenswrapper[4727]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.169935 4727 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173015 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173040 4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173048 4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173054 4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173059 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173065 4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173071 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173076 4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173082 4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173087 4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173092 4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173096 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173107 4727 feature_gate.go:330] unrecognized feature gate: Example Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173113 4727 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173117 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173122 4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173127 4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173131 4727 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173136 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173141 4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173146 4727 
feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173151 4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173156 4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173161 4727 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173165 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173169 4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173174 4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173178 4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173183 4727 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173190 4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173195 4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173201 4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173205 4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173211 4727 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173216 4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173222 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173227 4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173232 4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173238 4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173244 4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173250 4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173255 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173260 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173265 4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173269 4727 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173274 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173278 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173284 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173290 4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173295 4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173300 4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173304 4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173309 4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173315 4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
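The long runs of feature_gate.go:330 warnings here and below fire once per unrecognized gate name (OpenShift-specific gates that the embedded Kubernetes gate registry does not know), while explicitly set deprecated (KMSv1) and GA (ValidatingAdmissionPolicy, CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders) gates warn but still take effect. A rough Python emulation of that triage, illustrative only and not the kubelet's actual Go code:

```python
# Illustrative emulation of the feature_gate.go warning triage seen above:
# unknown names warn and are skipped; deprecated/GA gates warn but still apply.
KNOWN = {  # tiny sample; the real registry has hundreds of entries
    "KMSv1": "deprecated",
    "ValidatingAdmissionPolicy": "ga",
    "CloudDualStackNodeIPs": "ga",
    "DynamicResourceAllocation": "alpha",
}

def apply_gates(requested):
    effective = {}
    for name, enabled in requested.items():
        stage = KNOWN.get(name)
        value = str(enabled).lower()
        if stage is None:
            print(f"W unrecognized feature gate: {name}")
            continue  # gate is ignored, only a warning is logged
        if stage == "deprecated":
            print(f"W Setting deprecated feature gate {name}={value}. "
                  "It will be removed in a future release.")
        elif stage == "ga":
            print(f"W Setting GA feature gate {name}={value}. "
                  "It will be removed in a future release.")
        effective[name] = enabled
    return effective

print(apply_gates({"KMSv1": True, "GatewayAPI": True,
                   "ValidatingAdmissionPolicy": True}))
```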
Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173321 4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173327 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173332 4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173337 4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173344 4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173350 4727 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173355 4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173359 4727 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173364 4727 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173368 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173373 4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173377 4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173382 4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173386 4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173390 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173394 4727 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.173399 4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.173884 4727 flags.go:64] FLAG: --address="0.0.0.0" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.173901 4727 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.173929 4727 flags.go:64] FLAG: --anonymous-auth="true" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.173936 4727 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.173944 4727 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.173949 4727 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.173956 4727 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.173963 4727 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.173969 4727 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.173975 4727 flags.go:64] FLAG: 
--boot-id-file="/proc/sys/kernel/random/boot_id" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.173981 4727 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.173986 4727 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.173992 4727 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.173998 4727 flags.go:64] FLAG: --cgroup-root="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174003 4727 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174009 4727 flags.go:64] FLAG: --client-ca-file="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174014 4727 flags.go:64] FLAG: --cloud-config="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174019 4727 flags.go:64] FLAG: --cloud-provider="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174024 4727 flags.go:64] FLAG: --cluster-dns="[]" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174032 4727 flags.go:64] FLAG: --cluster-domain="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174037 4727 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174043 4727 flags.go:64] FLAG: --config-dir="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174049 4727 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174055 4727 flags.go:64] FLAG: --container-log-max-files="5" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174063 4727 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174068 4727 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174074 4727 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174080 4727 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174085 4727 flags.go:64] FLAG: --contention-profiling="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174091 4727 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174096 4727 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174102 4727 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174107 4727 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174115 4727 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174120 4727 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174126 4727 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174131 4727 flags.go:64] FLAG: --enable-load-reader="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174137 4727 flags.go:64] FLAG: --enable-server="true" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174142 4727 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174151 4727 
flags.go:64] FLAG: --event-burst="100" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174156 4727 flags.go:64] FLAG: --event-qps="50" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174162 4727 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174167 4727 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174172 4727 flags.go:64] FLAG: --eviction-hard="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174179 4727 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174184 4727 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174190 4727 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174196 4727 flags.go:64] FLAG: --eviction-soft="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174202 4727 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174208 4727 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174213 4727 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174224 4727 flags.go:64] FLAG: --experimental-mounter-path="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174230 4727 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174236 4727 flags.go:64] FLAG: --fail-swap-on="true" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174241 4727 flags.go:64] FLAG: --feature-gates="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174249 4727 flags.go:64] FLAG: --file-check-frequency="20s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174254 4727 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174261 4727 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174267 4727 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174273 4727 flags.go:64] FLAG: --healthz-port="10248" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174278 4727 flags.go:64] FLAG: --help="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174284 4727 flags.go:64] FLAG: --hostname-override="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174289 4727 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174295 4727 flags.go:64] FLAG: --http-check-frequency="20s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174300 4727 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174306 4727 flags.go:64] FLAG: --image-credential-provider-config="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174311 4727 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174316 4727 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174322 4727 flags.go:64] FLAG: --image-service-endpoint="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174327 4727 flags.go:64] FLAG: 
--kernel-memcg-notification="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174332 4727 flags.go:64] FLAG: --kube-api-burst="100" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174337 4727 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174343 4727 flags.go:64] FLAG: --kube-api-qps="50" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174348 4727 flags.go:64] FLAG: --kube-reserved="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174354 4727 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174359 4727 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174364 4727 flags.go:64] FLAG: --kubelet-cgroups="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174370 4727 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174375 4727 flags.go:64] FLAG: --lock-file="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174381 4727 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174386 4727 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174392 4727 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174402 4727 flags.go:64] FLAG: --log-json-split-stream="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174409 4727 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174415 4727 flags.go:64] FLAG: --log-text-split-stream="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174422 4727 flags.go:64] FLAG: --logging-format="text" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174427 4727 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174433 4727 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174439 4727 flags.go:64] FLAG: --manifest-url="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174444 4727 flags.go:64] FLAG: --manifest-url-header="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174452 4727 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174457 4727 flags.go:64] FLAG: --max-open-files="1000000" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174464 4727 flags.go:64] FLAG: --max-pods="110" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174470 4727 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174475 4727 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174481 4727 flags.go:64] FLAG: --memory-manager-policy="None" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174487 4727 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174492 4727 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174498 4727 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174503 4727 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174517 4727 flags.go:64] FLAG: --node-status-max-images="50" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174523 4727 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174529 4727 flags.go:64] FLAG: --oom-score-adj="-999" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174535 4727 flags.go:64] FLAG: --pod-cidr="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174540 4727 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174550 4727 flags.go:64] FLAG: --pod-manifest-path="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174554 4727 flags.go:64] FLAG: --pod-max-pids="-1" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174560 4727 flags.go:64] FLAG: --pods-per-core="0" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174564 4727 flags.go:64] FLAG: --port="10250" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174569 4727 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174574 4727 flags.go:64] FLAG: --provider-id="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174579 4727 flags.go:64] FLAG: --qos-reserved="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174584 4727 flags.go:64] FLAG: --read-only-port="10255" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174589 4727 flags.go:64] FLAG: --register-node="true" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174594 4727 flags.go:64] FLAG: --register-schedulable="true" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174601 4727 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174612 4727 flags.go:64] FLAG: --registry-burst="10" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174618 4727 flags.go:64] FLAG: --registry-qps="5" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174624 4727 flags.go:64] FLAG: --reserved-cpus="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174630 4727 flags.go:64] FLAG: --reserved-memory="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174638 4727 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174645 4727 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174651 4727 flags.go:64] FLAG: --rotate-certificates="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174657 4727 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174662 4727 flags.go:64] FLAG: --runonce="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174668 4727 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174673 4727 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174709 4727 flags.go:64] FLAG: --seccomp-default="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174716 4727 flags.go:64] FLAG: --serialize-image-pulls="true" 
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174722 4727 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174727 4727 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174733 4727 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174739 4727 flags.go:64] FLAG: --storage-driver-password="root" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174744 4727 flags.go:64] FLAG: --storage-driver-secure="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174750 4727 flags.go:64] FLAG: --storage-driver-table="stats" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174756 4727 flags.go:64] FLAG: --storage-driver-user="root" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174761 4727 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174767 4727 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174772 4727 flags.go:64] FLAG: --system-cgroups="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174778 4727 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174786 4727 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174791 4727 flags.go:64] FLAG: --tls-cert-file="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174797 4727 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174805 4727 flags.go:64] FLAG: --tls-min-version="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174810 4727 flags.go:64] FLAG: --tls-private-key-file="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174815 4727 flags.go:64] FLAG: --topology-manager-policy="none" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174821 4727 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174826 4727 flags.go:64] FLAG: --topology-manager-scope="container" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174832 4727 flags.go:64] FLAG: --v="2" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174839 4727 flags.go:64] FLAG: --version="false" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174846 4727 flags.go:64] FLAG: --vmodule="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174853 4727 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.174859 4727 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.174997 4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175005 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175011 4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175016 4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175021 4727 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175025 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175029 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175034 4727 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175038 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175042 4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175047 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175051 4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175056 4727 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175060 4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175066 4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175072 4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175077 4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175082 4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175086 4727 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175091 4727 feature_gate.go:330] unrecognized feature gate: Example Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175095 4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175100 4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175105 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175110 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175115 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175121 4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175125 4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175130 4727 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175135 4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175140 4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 
14:31:36.175144 4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175148 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175153 4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175157 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175161 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175166 4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175170 4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175175 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175180 4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175184 4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175190 4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175195 4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175200 4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175204 4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175209 4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175215 4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175221 4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175227 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175232 4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175238 4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
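The flags.go:64 dump above records every effective CLI flag in the shape FLAG: --name="value". A small sketch for pulling those back out of a saved journal (say, journalctl -u kubelet captured to a file), which is handy when comparing flag values across restarts:

```python
# Sketch: recover the effective kubelet flag values from saved journal text.
# Matches the flags.go:64 lines above, which look like: FLAG: --name="value"
import re

FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')

def parse_flags(log_text):
    """Return {flag: value} for every FLAG line in the given text."""
    return {m.group(1): m.group(2) for m in FLAG_RE.finditer(log_text)}

sample = 'I1210 14:31:36.174498 4727 flags.go:64] FLAG: --node-ip="192.168.126.11"'
print(parse_flags(sample))  # {'--node-ip': '192.168.126.11'}
```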
Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175245 4727 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175250 4727 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175256 4727 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175261 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175266 4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175270 4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175275 4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175280 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175285 4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175289 4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175294 4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175300 4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175306 4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175312 4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175318 4727 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175323 4727 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175329 4727 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175334 4727 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175339 4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175345 4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.175350 4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.175536 4727 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.250622 4727 server.go:491] 
"Kubelet version" kubeletVersion="v1.31.5" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.250674 4727 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250825 4727 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250836 4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250840 4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250844 4727 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250862 4727 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250867 4727 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250872 4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250876 4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250880 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250883 4727 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250887 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250891 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250944 4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250948 4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250952 4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250955 4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250959 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250963 4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250966 4727 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250971 4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250975 4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250979 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250984 4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250988 4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250992 4727 feature_gate.go:330] unrecognized 
feature gate: BootcNodeManagement Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.250996 4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251002 4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251021 4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251025 4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251031 4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251038 4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251043 4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251048 4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251052 4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251056 4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251061 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251064 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251069 4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251073 4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251078 4727 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251082 4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251101 4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251105 4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251110 4727 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251114 4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251139 4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251143 4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251147 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251150 4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251154 4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251157 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251175 4727 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251179 4727 feature_gate.go:330] unrecognized feature gate: Example Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251183 4727 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251188 4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251192 4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
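Each pass over the gates ends with a feature_gate.go:386 summary of the effective map in Go's fmt notation, e.g. feature gates: {map[CloudDualStackNodeIPs:true ... VolumeAttributesClass:false]}. A sketch that parses such a line into a Python dict, so the three identical summaries in this boot can be diffed mechanically:

```python
# Sketch: parse the Go-formatted gate summary printed at feature_gate.go:386
# into a Python dict for comparison across kubelet restarts.
import re

PAIR_RE = re.compile(r"(\w+):(true|false)")

def parse_gate_summary(line):
    # Keep only the body between "{map[" and the closing "]}".
    body = line.split("feature gates: {map[", 1)[1].rstrip("]}")
    return {k: v == "true" for k, v in PAIR_RE.findall(body)}

line = "feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}"
print(parse_gate_summary(line))
# {'CloudDualStackNodeIPs': True, 'KMSv1': True, 'NodeSwap': False}
```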
Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251198 4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251202 4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251207 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251212 4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251216 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251220 4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251224 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251228 4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251231 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251235 4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251254 4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251259 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251262 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251267 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251271 4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.251279 4727 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251455 4727 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251467 4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251472 4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251492 4727 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251496 4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251501 4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 
14:31:36.251504 4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251509 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251513 4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251517 4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251520 4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251523 4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251527 4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251531 4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251535 4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251539 4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251542 4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251546 4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251550 4727 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251580 4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251585 4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251590 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251595 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251599 4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251603 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251608 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251612 4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251616 4727 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251620 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251625 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251652 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251657 4727 feature_gate.go:330] unrecognized feature gate: Example Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 
14:31:36.251664 4727 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251669 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251674 4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251679 4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251683 4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251687 4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251691 4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251696 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251700 4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251707 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251711 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251735 4727 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251739 4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251744 4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251748 4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251753 4727 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251757 4727 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251761 4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251765 4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251769 4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251773 4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251777 4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251782 4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251787 4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251811 4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251817 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251821 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251827 4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251833 4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251838 4727 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251843 4727 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251847 4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251853 4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251857 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251862 4727 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251866 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251872 4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251895 4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.251913 4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.251923 4727 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.270384 4727 server.go:940] "Client rotation is on, will bootstrap in background" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.274888 4727 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.275063 4727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
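Just below, the certificate manager reports an expiration of 2026-02-24 05:52:08 UTC but a rotation deadline of 2025-11-20, already in the past at this boot, so it immediately logs "Rotating certificates" and then fails with connection refused because the API server is not yet reachable. A hedged sketch of the deadline computation as I understand client-go's certificate manager (a jittered point roughly 70-90% of the way through the cert's validity; the notBefore used here is an assumption, one year before the logged expiry):

```python
# Hedged sketch (assumed behavior, per my reading of client-go's certificate
# manager): the rotation deadline is a random point 70-90% through the cert's
# validity window, so it can land in the past, in which case rotation is
# attempted immediately, as in the "Rotating certificates" lines below.
import random
from datetime import datetime, timedelta

def rotation_deadline(not_before, not_after):
    total = (not_after - not_before).total_seconds()
    jitter = 0.7 + 0.2 * random.random()      # uniform in [0.7, 0.9)
    return not_before + timedelta(seconds=total * jitter)

nb = datetime(2025, 2, 24, 5, 52, 8)   # ASSUMED issue time, one year before expiry
na = datetime(2026, 2, 24, 5, 52, 8)   # expiration from the log line below
print(rotation_deadline(nb, na))        # e.g. a date in Nov 2025, before Dec 10
```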
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.275667 4727 server.go:997] "Starting client certificate rotation"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.275704 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.276381 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-20 20:28:13.764638368 +0000 UTC
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.276549 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.301455 4727 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.303757 4727 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 10 14:31:36 crc kubenswrapper[4727]: E1210 14:31:36.304042 4727 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
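Two things worth noting in the certificate entries above: the rotation deadline (2025-11-20) is already in the past at boot time (2025-12-10), which is why the manager immediately logs "Rotating certificates", and the resulting CSR POST fails only because the API server is not yet accepting connections. My understanding (hedged, from the upstream client-go certificate manager) is that the deadline is picked at a jittered point of roughly 80% plus or minus 10% of the certificate's validity window. A sketch of that window arithmetic; the notBefore value below is an assumption, since the log does not show when the client certificate was issued:

```python
from datetime import datetime, timedelta
import random

# notAfter comes from the certificate_manager.go:356 entry above; notBefore
# is an ASSUMPTION for illustration (not present in the log).
not_after = datetime.fromisoformat("2026-02-24 05:52:08+00:00")
not_before = datetime.fromisoformat("2025-02-24 05:52:08+00:00")  # assumed

total = not_after - not_before
# Jittered deadline somewhere in 70%..90% of the lifetime (hedged
# description of client-go's ~80% +/- 10% behaviour).
fraction = 0.7 + random.random() * 0.2
deadline = not_before + timedelta(seconds=total.total_seconds() * fraction)
print("rotation deadline falls at", deadline.isoformat())
```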
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.311312 4727 log.go:25] "Validated CRI v1 runtime API"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.325876 4727 log.go:25] "Validated CRI v1 image API"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.328114 4727 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.331000 4727 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-10-14-27-21-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.331044 4727 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.370377 4727 manager.go:217] Machine: {Timestamp:2025-12-10 14:31:36.369129459 +0000 UTC m=+0.563904021 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:bd5527bc-7a25-4b1d-868d-d32d9da06147 BootID:2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:00:92:aa Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:00:92:aa Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f8:39:7c Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:78:a4:26 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:71:71:b5 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:63:9a:fb Speed:-1 Mtu:1496} {Name:eth10 MacAddress:1a:5a:17:f4:d7:02 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d2:d0:6a:99:e8:90 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.370624 4727 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.370787 4727 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.371601 4727 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.371982 4727 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.372031 4727 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.372323 4727 topology_manager.go:138] "Creating topology manager with none policy"
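The `nodeConfig` dump above carries the hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, inodesFree < 5%). Combined with the capacities logged in the Machine and fs entries (MemoryCapacity 33654128640 bytes; /var on /dev/vda4 with 85292941312 bytes and 41679680 inodes), the percentage signals resolve to concrete trigger points. A small sketch of that arithmetic, assuming imagefs and nodefs share the single /var filesystem on this node (plausible for CRI-O under /var/lib/containers, but an assumption):

```python
# Resolve the hard-eviction thresholds from the nodeConfig entry above into
# absolute values, using capacities from the Machine/fs entries in this log.
MEMORY_CAPACITY = 33654128640      # bytes, from the Machine entry
NODEFS_CAPACITY = 85292941312      # bytes, /dev/vda4 mounted at /var
NODEFS_INODES = 41679680           # inodes on /dev/vda4

thresholds = {
    "memory.available":  100 * 1024 * 1024,        # fixed quantity: 100Mi
    "nodefs.available":  0.10 * NODEFS_CAPACITY,   # 10% of node fs
    "imagefs.available": 0.15 * NODEFS_CAPACITY,   # 15% (assumed same fs)
    "nodefs.inodesFree": 0.05 * NODEFS_INODES,     # 5% of inodes
}

for signal, value in thresholds.items():
    unit = "inodes" if "inodes" in signal else "bytes"
    print(f"{signal}: eviction triggers below {value:,.0f} {unit}")
```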
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.372335 4727 container_manager_linux.go:303] "Creating device plugin manager"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.372675 4727 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.372988 4727 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.373359 4727 state_mem.go:36] "Initialized new in-memory state store"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.373967 4727 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.374618 4727 kubelet.go:418] "Attempting to sync node with API server"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.374640 4727 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.374682 4727 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.374708 4727 kubelet.go:324] "Adding apiserver pod source"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.374724 4727 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.393168 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:36 crc kubenswrapper[4727]: E1210 14:31:36.393358 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.397933 4727 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.398493 4727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.399346 4727 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.399955 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.399976 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.399984 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.399990 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.400004 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.400014 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.400021 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.400034 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.400043 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.400050 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.400063 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.400071 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.400254 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.400992 4727 server.go:1280] "Started kubelet"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.401252 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.401370 4727 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.401369 4727 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.402408 4727 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 10 14:31:36 crc systemd[1]: Started Kubernetes Kubelet.
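Every request the kubelet makes to `https://api-int.crc.testing:6443` in this window fails with `connect: connection refused`: on this single-node cluster the API server runs as a static pod that the kubelet itself has to launch from /etc/kubernetes/manifests, so the errors are expected to clear once that pod is up. A minimal probe sketch for watching the endpoint come back (host and port are from the log; the 2-second interval is an arbitrary choice):

```python
"""Poll the API server endpoint the kubelet is failing to reach above."""
import socket
import time

HOST, PORT = "api-int.crc.testing", 6443

while True:
    try:
        # Succeeds as soon as something is accepting TCP connections on 6443.
        with socket.create_connection((HOST, PORT), timeout=2):
            print("API server is accepting TCP connections")
            break
    except OSError as exc:
        print(f"still down: {exc}")
        time.sleep(2)
```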
Dec 10 14:31:36 crc kubenswrapper[4727]: E1210 14:31:36.403683 4727 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.180:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187fe11da596adfe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 14:31:36.400940542 +0000 UTC m=+0.595715104,LastTimestamp:2025-12-10 14:31:36.400940542 +0000 UTC m=+0.595715104,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.405494 4727 server.go:460] "Adding debug handlers to kubelet server"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.406699 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.406799 4727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.407425 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:16:14.260742538 +0000 UTC
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.407495 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 477h44m37.853255037s for next certificate rotation
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.407608 4727 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.408134 4727 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 10 14:31:36 crc kubenswrapper[4727]: E1210 14:31:36.407757 4727 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.408091 4727 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.408766 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:36 crc kubenswrapper[4727]: E1210 14:31:36.408885 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
Dec 10 14:31:36 crc kubenswrapper[4727]: E1210 14:31:36.408723 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="200ms"
Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.494007 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:36 crc kubenswrapper[4727]: E1210 14:31:36.494248 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.494264 4727 factory.go:55] Registering systemd factory
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.494303 4727 factory.go:221] Registration of the systemd container factory successfully
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.499623 4727 factory.go:153] Registering CRI-O factory
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.499677 4727 factory.go:221] Registration of the crio container factory successfully
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.499835 4727 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.500027 4727 factory.go:103] Registering Raw factory
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.500060 4727 manager.go:1196] Started watching for new ooms in manager
Dec 10 14:31:36 crc kubenswrapper[4727]: E1210 14:31:36.509408 4727 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.509755 4727 manager.go:319] Starting recovery of all containers
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.511496 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.511661 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.511739 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.511811 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.511893 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.511971 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.512042 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.512117 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.512181 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.512241 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.512298 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.512370 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.512433 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.512501 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.512593 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.512660 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.512726 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.512796 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.512859 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.512991 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.513058 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.513120 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.513192 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.513287 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.513396 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.513485 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.513685 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.513770 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.513889 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.514055 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.514133 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.514203 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.514277 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.514360 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.514432 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.514507 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.514570 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.514720 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.514795 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.514858 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.514940 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.515027 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.515112 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.515204 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.515295 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.515385 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.515480 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.515573 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.515644 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.515728 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.515811 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.515887 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.516003 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.516080 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.516180 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.516246 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.516330 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.516392 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.516454 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.516515 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.516572 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.516641 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.516704 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.516776 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.516843 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.516921 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.516995 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.517069 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.517135 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.517204 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.517275 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.517335 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.517393 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.517503 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.517564 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.517629 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.517699 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.517772 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.517831 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.517967 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.518056 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.518118 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.518182 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.518239 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.518318 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.518412 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.519153 4727 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.519259 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.519336 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.519402 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.519468 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.519587 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.519678 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.519759 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.519841 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.519943 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.520055 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.520133 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.520193 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.520250 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.520307 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.520394 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.520468 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.520526 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.520621 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.520765 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.520867 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521023 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521108 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521186 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521320 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521424 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521454 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521480 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521505 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521524 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521540 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521556 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521572 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521589 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521604 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521621 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521668 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521689 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521706 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521723 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521738 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521752 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521771 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521790 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521806 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521824 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521841 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521916 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521941 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521960 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521978 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.521996 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a"
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522014 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522031 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522048 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522065 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522080 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522098 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522115 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522132 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522148 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522166 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522181 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522203 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522223 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522240 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522257 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522276 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522293 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522311 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522330 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522349 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522368 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522388 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522405 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522421 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522472 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522489 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522509 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522525 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522546 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522562 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522578 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522594 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522616 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522660 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522676 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522692 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522732 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522748 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522763 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522782 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522797 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522816 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522832 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522846 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522863 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522917 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522937 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522955 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522973 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.522990 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523012 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523028 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523047 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523061 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523081 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523096 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523115 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523133 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523148 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523163 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523181 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523198 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523213 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523228 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523243 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523260 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523277 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523292 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523312 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523327 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523339 4727 reconstruct.go:97] "Volume reconstruction finished" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.523363 4727 reconciler.go:26] "Reconciler: start to sync state" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.527512 4727 manager.go:324] Recovery completed Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.546393 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.549227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.549272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.549285 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.550341 4727 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.550364 4727 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.550407 4727 state_mem.go:36] "Initialized new in-memory state store" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.559631 4727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.561663 4727 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.561737 4727 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.561777 4727 kubelet.go:2335] "Starting kubelet main sync loop" Dec 10 14:31:36 crc kubenswrapper[4727]: E1210 14:31:36.561841 4727 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.562013 4727 policy_none.go:49] "None policy: Start" Dec 10 14:31:36 crc kubenswrapper[4727]: W1210 14:31:36.565258 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Dec 10 14:31:36 crc kubenswrapper[4727]: E1210 14:31:36.565351 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.565464 4727 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.565528 4727 state_mem.go:35] "Initializing new in-memory state store" Dec 10 14:31:36 crc kubenswrapper[4727]: E1210 14:31:36.609793 4727 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.626349 4727 manager.go:334] "Starting Device Plugin manager" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.626405 4727 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.626418 4727 server.go:79] "Starting device plugin registration server" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.626847 4727 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.626863 4727 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.627339 4727 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.627420 4727 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.627427 4727 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 10 14:31:36 crc kubenswrapper[4727]: E1210 14:31:36.635171 4727 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.662122 4727 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 14:31:36 crc kubenswrapper[4727]: 
I1210 14:31:36.662302 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.663710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.663761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.663771 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.663976 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.664477 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.664616 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.664963 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.665004 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.665014 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.665133 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.665363 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.665426 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.666304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.666331 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.666341 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.666430 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.666454 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.666468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.666504 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.666656 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.666720 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.667795 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.667819 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.667845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.667803 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.667865 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.667847 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.667870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.667932 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.667936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.668125 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.668141 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.668172 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.669243 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.669347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.669385 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.669286 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.669457 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.669471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.670685 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.670763 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.672314 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.672363 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.672376 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4727]: E1210 14:31:36.694976 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="400ms" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.726173 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.726233 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.726260 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.726279 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.726301 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.726319 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.726339 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.726356 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.726450 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.726524 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.726559 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.726585 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.726633 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.726664 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.726697 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.727204 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4727]: 
I1210 14:31:36.729513 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.729634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.729716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.729835 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 14:31:36 crc kubenswrapper[4727]: E1210 14:31:36.730427 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.828338 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.828428 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.828464 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.828490 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.828512 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.828528 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.828543 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 
14:31:36.828560 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.828588 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.828607 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.828622 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.828637 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.828654 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.828669 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.828682 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.829142 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.829224 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.829221 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.829254 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.829272 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.829256 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.829277 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.829358 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.829360 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.829311 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.829350 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.829375 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.829415 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.829516 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.829516 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.930599 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.932236 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.932325 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.932345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:36 crc kubenswrapper[4727]: I1210 14:31:36.932393 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 10 14:31:36 crc kubenswrapper[4727]: E1210 14:31:36.933149 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc"
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.000539 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.008996 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 10 14:31:37 crc kubenswrapper[4727]: W1210 14:31:37.029430 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-2ac9aded730d7bf0791d35f0a46814b7a1e61b05c193cf79800562c5b15a8183 WatchSource:0}: Error finding container 2ac9aded730d7bf0791d35f0a46814b7a1e61b05c193cf79800562c5b15a8183: Status 404 returned error can't find the container with id 2ac9aded730d7bf0791d35f0a46814b7a1e61b05c193cf79800562c5b15a8183
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.032606 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 10 14:31:37 crc kubenswrapper[4727]: W1210 14:31:37.033323 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-801d5b23bc2f71fc992054f134f2aadd184a5186ed8c157b679ac8db6cc1fdb3 WatchSource:0}: Error finding container 801d5b23bc2f71fc992054f134f2aadd184a5186ed8c157b679ac8db6cc1fdb3: Status 404 returned error can't find the container with id 801d5b23bc2f71fc992054f134f2aadd184a5186ed8c157b679ac8db6cc1fdb3
Dec 10 14:31:37 crc kubenswrapper[4727]: W1210 14:31:37.047230 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-faaaca5c2a19ef989e4496139f7e6b8502636856a17cee6ac7457ae800e80c54 WatchSource:0}: Error finding container faaaca5c2a19ef989e4496139f7e6b8502636856a17cee6ac7457ae800e80c54: Status 404 returned error can't find the container with id faaaca5c2a19ef989e4496139f7e6b8502636856a17cee6ac7457ae800e80c54
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.050826 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.055984 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 14:31:37 crc kubenswrapper[4727]: E1210 14:31:37.097404 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="800ms"
Dec 10 14:31:37 crc kubenswrapper[4727]: W1210 14:31:37.197695 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-8e8db75f3e9884c94183dd98e7d5a8fec06c13c59d3374feaedb6ddc0aeac8ff WatchSource:0}: Error finding container 8e8db75f3e9884c94183dd98e7d5a8fec06c13c59d3374feaedb6ddc0aeac8ff: Status 404 returned error can't find the container with id 8e8db75f3e9884c94183dd98e7d5a8fec06c13c59d3374feaedb6ddc0aeac8ff
Dec 10 14:31:37 crc kubenswrapper[4727]: W1210 14:31:37.203630 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-def7943ff01de148105afce8d3dbb296c2a188e1f44761dc02ed8d7ef8edddbb WatchSource:0}: Error finding container def7943ff01de148105afce8d3dbb296c2a188e1f44761dc02ed8d7ef8edddbb: Status 404 returned error can't find the container with id def7943ff01de148105afce8d3dbb296c2a188e1f44761dc02ed8d7ef8edddbb
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.333612 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.335013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.335062 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.335075 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.335104 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 10 14:31:37 crc kubenswrapper[4727]: E1210 14:31:37.335716 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc"
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.402570 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.571195 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2ac9aded730d7bf0791d35f0a46814b7a1e61b05c193cf79800562c5b15a8183"}
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.572706 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"def7943ff01de148105afce8d3dbb296c2a188e1f44761dc02ed8d7ef8edddbb"}
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.574083 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8e8db75f3e9884c94183dd98e7d5a8fec06c13c59d3374feaedb6ddc0aeac8ff"}
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.575285 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"faaaca5c2a19ef989e4496139f7e6b8502636856a17cee6ac7457ae800e80c54"}
Dec 10 14:31:37 crc kubenswrapper[4727]: I1210 14:31:37.576470 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"801d5b23bc2f71fc992054f134f2aadd184a5186ed8c157b679ac8db6cc1fdb3"}
Dec 10 14:31:37 crc kubenswrapper[4727]: W1210 14:31:37.611971 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:37 crc kubenswrapper[4727]: E1210 14:31:37.612174 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
Dec 10 14:31:37 crc kubenswrapper[4727]: W1210 14:31:37.663860 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:37 crc kubenswrapper[4727]: E1210 14:31:37.664050 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
Dec 10 14:31:37 crc kubenswrapper[4727]: W1210 14:31:37.760617 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:37 crc kubenswrapper[4727]: E1210 14:31:37.761157 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
Dec 10 14:31:37 crc kubenswrapper[4727]: W1210 14:31:37.821335 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:37 crc kubenswrapper[4727]: E1210 14:31:37.821428 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
Dec 10 14:31:37 crc kubenswrapper[4727]: E1210 14:31:37.898614 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="1.6s"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.136146 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.139171 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.139241 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.139253 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.139297 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 10 14:31:38 crc kubenswrapper[4727]: E1210 14:31:38.140121 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.402700 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.454970 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 10 14:31:38 crc kubenswrapper[4727]: E1210 14:31:38.456756 4727 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.581609 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b" exitCode=0
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.581727 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.581727 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b"}
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.583132 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.583181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.583194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.585650 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.586148 4727 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1" exitCode=0
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.586232 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1"}
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.586256 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.586685 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.586720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.586731 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.587441 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.587497 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.587517 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.588496 4727 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877" exitCode=0
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.588571 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877"}
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.588573 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.589661 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.589706 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.589727 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.593728 4727 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee" exitCode=0
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.593837 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.593859 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee"}
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.595181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.595221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.595232 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.596875 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8c5861a48edbc265757ef603e83ebb8a092643fd4a59b0e574a7adffea2d8883"}
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.596953 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5f9760fae34882c2d667494ea8e6be195227d998f0a582f3bd6de65ddc122d32"}
Dec 10 14:31:38 crc kubenswrapper[4727]: I1210 14:31:38.596972 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"27f54dcc8b8184353685144518200440ca7fe027ce457ae0cbb85f6bac6935fa"}
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.403321 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:39 crc kubenswrapper[4727]: E1210 14:31:39.499974 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="3.2s"
Dec 10 14:31:39 crc kubenswrapper[4727]: W1210 14:31:39.544831 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:39 crc kubenswrapper[4727]: E1210 14:31:39.544958 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.613124 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c41e15777c9568a8c26a8f6559505b878af76013b6c62ad70c8bde7eb2dab957"}
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.613267 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.614718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.614767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.614782 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.617317 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058"}
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.617385 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c"}
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.621123 4727 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435" exitCode=0
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.621297 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.621278 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435"}
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.623003 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.623653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.623675 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.624220 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"815aab5c9a40b70a41cbf02e8e219ef934e21cf097b8ce3d7507ab1d43809a01"}
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.624314 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.625894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.625971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.625984 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.627922 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8"}
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.627995 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326"}
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.796698 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.956207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.956276 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.956289 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:39 crc kubenswrapper[4727]: I1210 14:31:39.956325 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 10 14:31:39 crc kubenswrapper[4727]: E1210 14:31:39.956856 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc"
Dec 10 14:31:40 crc kubenswrapper[4727]: W1210 14:31:40.036825 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:40 crc kubenswrapper[4727]: E1210 14:31:40.036951 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.225139 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.442289 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:40 crc kubenswrapper[4727]: W1210 14:31:40.510981 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:40 crc kubenswrapper[4727]: E1210 14:31:40.511080 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.640959 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82"}
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.641120 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.643466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.643543 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.643560 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.645165 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a"}
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.647710 4727 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c" exitCode=0
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.647844 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.647851 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.648090 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.648210 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c"}
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.649112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.649159 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.649172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.649224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.649240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.649182 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.649420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.649440 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:40 crc kubenswrapper[4727]: I1210 14:31:40.649451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:40 crc kubenswrapper[4727]: W1210 14:31:40.948646 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:40 crc kubenswrapper[4727]: E1210 14:31:40.948730 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.403103 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.677446 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1b98c9344051460d6816961a20b019e913a4cd06eeb16b394e61d07d048e1014"}
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.677531 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e"}
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.677625 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.679131 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.679163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.679175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.683578 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.683656 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.683966 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8d00cd326bdadaf828c87f3de300e3734078b9e6b4248c583755653611bea702"}
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.684008 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bc9e274851f99977da7e9cf67e949f0ca072dc6621914ecddba9778fdfca4890"}
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.684027 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a010e72ae83ee1b20433f438efd67006d4bf039011cfa9c84ae5d19c4cfacf13"}
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.684048 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.684954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.685019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.685033 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.684954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.685132 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:41 crc kubenswrapper[4727]: I1210 14:31:41.685145 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.368536 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.503738 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.544599 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.693130 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.696372 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1b98c9344051460d6816961a20b019e913a4cd06eeb16b394e61d07d048e1014" exitCode=255
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.696446 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1b98c9344051460d6816961a20b019e913a4cd06eeb16b394e61d07d048e1014"}
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.696558 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.697857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.697895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.697937 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.698829 4727 scope.go:117] "RemoveContainer" containerID="1b98c9344051460d6816961a20b019e913a4cd06eeb16b394e61d07d048e1014"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.700796 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"40cc808827af2142e71916784e9d42138f878bb443c5b88b10a048268577da85"}
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.700866 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.700892 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.700867 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"87d9e6dbcaea461c31eb44e844655898d6a3bfa039d1104dd762c046d30b1f48"}
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.702107 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.702153 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.702170 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.702233 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.702295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:42 crc kubenswrapper[4727]: I1210 14:31:42.702310 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.157722 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.159764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.159822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.159836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.159879 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.224988 4727 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.225111 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.581525 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.581971 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.583976 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.584039 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.584054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.706508 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.708561 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f"}
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.708616 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.708659 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.708686 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.709475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.709509 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.709522 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.709568 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.709587 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:43 crc kubenswrapper[4727]: I1210 14:31:43.709640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:44 crc kubenswrapper[4727]: I1210 14:31:44.711724 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 10 14:31:44 crc kubenswrapper[4727]: I1210 14:31:44.712398 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:44 crc kubenswrapper[4727]: I1210 14:31:44.713542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:44 crc kubenswrapper[4727]: I1210 14:31:44.713650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:44 crc kubenswrapper[4727]: I1210 14:31:44.713724 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:44 crc kubenswrapper[4727]: I1210 14:31:44.803479 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 14:31:45 crc kubenswrapper[4727]: I1210 14:31:45.714773 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:45 crc kubenswrapper[4727]: I1210 14:31:45.716293 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:45 crc kubenswrapper[4727]: I1210 14:31:45.716343 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:45 crc kubenswrapper[4727]: I1210 14:31:45.716362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:45 crc kubenswrapper[4727]: I1210 14:31:45.998580 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 10 14:31:45 crc kubenswrapper[4727]: I1210 14:31:45.998824 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:46 crc kubenswrapper[4727]: I1210 14:31:46.000293 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:46 crc kubenswrapper[4727]: I1210 14:31:46.000334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:46 crc kubenswrapper[4727]: I1210 14:31:46.000348 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:46 crc kubenswrapper[4727]: I1210 14:31:46.350666 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Dec 10 14:31:46 crc kubenswrapper[4727]: I1210 14:31:46.350958 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:46 crc kubenswrapper[4727]: I1210 14:31:46.352445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:46 crc kubenswrapper[4727]: I1210 14:31:46.352486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:46 crc kubenswrapper[4727]: I1210 14:31:46.352496 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:46 crc kubenswrapper[4727]: I1210 14:31:46.447028 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 14:31:46 crc kubenswrapper[4727]: E1210 14:31:46.635422 4727 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 10 14:31:46 crc kubenswrapper[4727]: I1210 14:31:46.717434 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:46 crc kubenswrapper[4727]: I1210 14:31:46.718658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:46 crc kubenswrapper[4727]: I1210 14:31:46.718736 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:46 crc kubenswrapper[4727]: I1210 14:31:46.718752 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:48 crc kubenswrapper[4727]: I1210 14:31:48.619231 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 10 14:31:48 crc kubenswrapper[4727]: I1210 14:31:48.619840 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:48 crc kubenswrapper[4727]: I1210 14:31:48.622590 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:48 crc kubenswrapper[4727]: I1210 14:31:48.622667 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:48 crc kubenswrapper[4727]: I1210 14:31:48.622753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:48 crc kubenswrapper[4727]: I1210 14:31:48.630631 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 10 14:31:48 crc kubenswrapper[4727]: I1210 14:31:48.728707 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:48 crc kubenswrapper[4727]: I1210 14:31:48.729935 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:48 crc kubenswrapper[4727]: I1210 14:31:48.729992 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:48 crc kubenswrapper[4727]: I1210 14:31:48.730005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:48 crc kubenswrapper[4727]: I1210 14:31:48.749690 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 10 14:31:49 crc kubenswrapper[4727]: I1210 14:31:49.733760 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:49 crc kubenswrapper[4727]: I1210 14:31:49.735447 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:49 crc kubenswrapper[4727]: I1210 14:31:49.735538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:49 crc kubenswrapper[4727]: I1210 14:31:49.735556 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:51 crc kubenswrapper[4727]: I1210 14:31:51.055046 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 10 14:31:51 crc kubenswrapper[4727]: I1210 14:31:51.055509 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:51 crc kubenswrapper[4727]: I1210 14:31:51.057187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:51 crc kubenswrapper[4727]: I1210 14:31:51.057238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:51 crc kubenswrapper[4727]: I1210 14:31:51.057251 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:51 crc kubenswrapper[4727]: I1210 14:31:51.097774 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 10 14:31:51 crc kubenswrapper[4727]: I1210 14:31:51.369646 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 10 14:31:51 crc kubenswrapper[4727]: I1210 14:31:51.740786 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:51 crc kubenswrapper[4727]: I1210 14:31:51.741999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:51 crc kubenswrapper[4727]: I1210 14:31:51.742053 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:51 crc kubenswrapper[4727]: I1210 14:31:51.742066 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:52 crc kubenswrapper[4727]: I1210 14:31:52.404404 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Dec 10 14:31:52 crc kubenswrapper[4727]: E1210 14:31:52.547616 4727 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError"
Dec 10 14:31:52 crc kubenswrapper[4727]: E1210 14:31:52.701829 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Dec 10 14:31:52 crc kubenswrapper[4727]: I1210 14:31:52.874490 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 14:31:52 crc kubenswrapper[4727]: I1210 14:31:52.876322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:31:52 crc kubenswrapper[4727]: I1210 14:31:52.876375 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:31:52 crc kubenswrapper[4727]: I1210 14:31:52.876388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:31:53 crc kubenswrapper[4727]: E1210 14:31:53.213409 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Dec 10 14:31:53 crc kubenswrapper[4727]: I1210 14:31:53.225458 4727 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 10 14:31:53 crc kubenswrapper[4727]: I1210 14:31:53.225650 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 10 14:31:55 crc kubenswrapper[4727]: E1210 14:31:55.289507 4727 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.187fe11da596adfe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 14:31:36.400940542 +0000 UTC m=+0.595715104,LastTimestamp:2025-12-10 14:31:36.400940542 +0000 UTC m=+0.595715104,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 10 14:31:55 crc kubenswrapper[4727]: W1210 14:31:55.736012 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Dec 10 14:31:55 crc kubenswrapper[4727]: I1210 14:31:55.736674 4727 trace.go:236] Trace[693994215]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 14:31:45.734) (total time: 10002ms):
Dec 10 14:31:55 crc kubenswrapper[4727]: Trace[693994215]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:31:55.735)
Dec 10 14:31:55 crc kubenswrapper[4727]: Trace[693994215]: [10.002472538s] [10.002472538s] END
Dec 10 14:31:55 crc kubenswrapper[4727]: E1210 14:31:55.736855 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Dec 10 14:31:55 crc kubenswrapper[4727]: I1210 14:31:55.796854 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 10 14:31:55 crc kubenswrapper[4727]: I1210 14:31:55.797059 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 10 14:31:55 crc kubenswrapper[4727]: I1210 14:31:55.803363 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 10 14:31:55 crc kubenswrapper[4727]: I1210 14:31:55.803476 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 10 14:31:56 crc kubenswrapper[4727]: I1210 14:31:56.453139 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]log ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]etcd ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/priority-and-fairness-filter ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/start-apiextensions-informers ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/start-apiextensions-controllers ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/crd-informer-synced ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/start-system-namespaces-controller ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Dec 10 14:31:56 crc kubenswrapper[4727]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/bootstrap-controller ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/apiservice-registration-controller ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/apiservice-discovery-controller ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]autoregister-completion ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/apiservice-openapi-controller ok
Dec 10 14:31:56 crc kubenswrapper[4727]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 10 14:31:56 crc kubenswrapper[4727]: livez check failed
Dec 10 14:31:56 crc kubenswrapper[4727]: I1210 14:31:56.453222 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed
with statuscode: 500" Dec 10 14:31:56 crc kubenswrapper[4727]: E1210 14:31:56.636222 4727 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 14:31:59 crc kubenswrapper[4727]: I1210 14:31:59.613643 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:59 crc kubenswrapper[4727]: I1210 14:31:59.616080 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:59 crc kubenswrapper[4727]: I1210 14:31:59.616153 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:59 crc kubenswrapper[4727]: I1210 14:31:59.616168 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:59 crc kubenswrapper[4727]: I1210 14:31:59.616205 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 14:31:59 crc kubenswrapper[4727]: E1210 14:31:59.620698 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.801440 4727 trace.go:236] Trace[444494112]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 14:31:46.275) (total time: 14525ms): Dec 10 14:32:00 crc kubenswrapper[4727]: Trace[444494112]: ---"Objects listed" error: 14525ms (14:32:00.801) Dec 10 14:32:00 crc kubenswrapper[4727]: Trace[444494112]: [14.525849944s] [14.525849944s] END Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.801499 4727 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.801515 4727 trace.go:236] Trace[2018122335]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 14:31:45.878) (total time: 14922ms): Dec 10 14:32:00 crc kubenswrapper[4727]: Trace[2018122335]: ---"Objects listed" error: 14922ms (14:32:00.801) Dec 10 14:32:00 crc kubenswrapper[4727]: Trace[2018122335]: [14.922532459s] [14.922532459s] END Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.801563 4727 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.803564 4727 trace.go:236] Trace[1949241879]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 14:31:46.688) (total time: 14115ms): Dec 10 14:32:00 crc kubenswrapper[4727]: Trace[1949241879]: ---"Objects listed" error: 14114ms (14:32:00.803) Dec 10 14:32:00 crc kubenswrapper[4727]: Trace[1949241879]: [14.115047671s] [14.115047671s] END Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.803597 4727 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.806208 4727 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.839839 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39212->192.168.126.11:17697: 
read: connection reset by peer" start-of-body= Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.839839 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39226->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.839943 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39212->192.168.126.11:17697: read: connection reset by peer" Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.840006 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39226->192.168.126.11:17697: read: connection reset by peer" Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.887072 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.887440 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.888933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.889042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.889110 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:00 crc kubenswrapper[4727]: I1210 14:32:00.894779 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.206552 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.207638 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.210857 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f" exitCode=255 Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.211019 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.211114 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f"} Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.211319 4727 scope.go:117] "RemoveContainer" containerID="1b98c9344051460d6816961a20b019e913a4cd06eeb16b394e61d07d048e1014" Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.211447 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.212460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.212496 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.212510 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.212515 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.212570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.212586 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.213196 4727 scope.go:117] "RemoveContainer" containerID="d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f" Dec 10 14:32:01 crc kubenswrapper[4727]: E1210 14:32:01.213418 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.272555 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.288845 4727 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.454364 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.909767 4727 csr.go:261] certificate signing request csr-j4zd8 is approved, waiting to be issued Dec 10 14:32:01 crc kubenswrapper[4727]: I1210 14:32:01.930500 4727 csr.go:257] certificate signing request csr-j4zd8 is issued Dec 10 14:32:02 crc kubenswrapper[4727]: I1210 14:32:02.232083 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 10 14:32:02 crc kubenswrapper[4727]: I1210 14:32:02.238022 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:32:02 crc kubenswrapper[4727]: I1210 14:32:02.239956 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 14:32:02 crc kubenswrapper[4727]: I1210 14:32:02.240023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:02 crc kubenswrapper[4727]: I1210 14:32:02.240037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:02 crc kubenswrapper[4727]: I1210 14:32:02.240973 4727 scope.go:117] "RemoveContainer" containerID="d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f" Dec 10 14:32:02 crc kubenswrapper[4727]: E1210 14:32:02.241216 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 10 14:32:02 crc kubenswrapper[4727]: I1210 14:32:02.249697 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:32:02 crc kubenswrapper[4727]: I1210 14:32:02.936584 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-10 14:27:01 +0000 UTC, rotation deadline is 2026-09-02 19:14:42.16747078 +0000 UTC Dec 10 14:32:02 crc kubenswrapper[4727]: I1210 14:32:02.936690 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6388h42m39.230790855s for next certificate rotation Dec 10 14:32:03 crc kubenswrapper[4727]: I1210 14:32:03.259490 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:32:03 crc kubenswrapper[4727]: I1210 14:32:03.267580 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:03 crc kubenswrapper[4727]: I1210 14:32:03.267640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:03 crc kubenswrapper[4727]: I1210 14:32:03.267654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:03 crc kubenswrapper[4727]: I1210 14:32:03.268464 4727 scope.go:117] "RemoveContainer" containerID="d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f" Dec 10 14:32:03 crc kubenswrapper[4727]: E1210 14:32:03.268669 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 10 14:32:04 crc kubenswrapper[4727]: I1210 14:32:04.836133 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:32:04 crc kubenswrapper[4727]: I1210 14:32:04.836382 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:32:04 crc kubenswrapper[4727]: I1210 14:32:04.838172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:04 crc 
kubenswrapper[4727]: I1210 14:32:04.838229 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:04 crc kubenswrapper[4727]: I1210 14:32:04.838242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:04 crc kubenswrapper[4727]: I1210 14:32:04.839072 4727 scope.go:117] "RemoveContainer" containerID="d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f" Dec 10 14:32:04 crc kubenswrapper[4727]: E1210 14:32:04.839256 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.277109 4727 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.621825 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.623588 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.623645 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.623656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.623776 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.632781 4727 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.633502 4727 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 10 14:32:06 crc kubenswrapper[4727]: E1210 14:32:06.633580 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 10 14:32:06 crc kubenswrapper[4727]: E1210 14:32:06.636386 4727 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.638090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.638140 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.638154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.638174 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.638187 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:06Z","lastTransitionTime":"2025-12-10T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:06 crc kubenswrapper[4727]: E1210 14:32:06.657567 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.662775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.662853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:06 crc kubenswrapper[4727]: I1210 14:32:06.662870 4727 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:06 crc kubenswrapper[4727]: E1210 14:32:06.714245 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:32:06 crc kubenswrapper[4727]: E1210 14:32:06.714305 4727 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 14:32:06 crc kubenswrapper[4727]: E1210 14:32:06.814731 4727 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 14:32:06 crc kubenswrapper[4727]: E1210 14:32:06.915706 4727 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.016546 4727 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.038587 4727 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.119733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.119784 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.119794 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.119812 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.119826 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.223058 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.223125 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.223136 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.223154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.223165 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.326562 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.326669 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.326690 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.326721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.326734 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.430431 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.430473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.430482 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.430501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.430517 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.472865 4727 apiserver.go:52] "Watching apiserver" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.479308 4727 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.480278 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k8b7p","openshift-multus/multus-6ph7v","openshift-multus/multus-additional-cni-plugins-nn8cx","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-node-identity/network-node-identity-vrzqb","openshift-dns/node-resolver-pq9t7","openshift-image-registry/node-ca-cstpp","openshift-machine-config-operator/machine-config-daemon-5kj8v","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c"] Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.481517 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.482620 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cstpp" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.482674 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.482723 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.482781 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.482859 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.482890 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.482669 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.483028 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.483157 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-pq9t7" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.483196 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.483207 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.483245 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.484439 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.488111 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.488557 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.490927 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.491296 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.494315 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.494588 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.494820 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.496971 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.497363 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.497571 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.497646 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.498167 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.498337 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.498493 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.498932 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.499189 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.499592 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.500385 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.501755 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.502558 4727 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.502599 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.502776 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.502800 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.502923 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.503095 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.503243 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.504106 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.504223 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.504478 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.504701 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.504802 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.510629 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.510644 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.510732 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.511208 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.511270 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.511951 4727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.527984 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528019 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528332 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528356 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528416 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528437 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528459 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528498 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528522 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528539 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528601 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528644 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528683 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528701 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 10 14:32:07 crc kubenswrapper[4727]: 
I1210 14:32:07.528741 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528759 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528779 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528820 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528841 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528861 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528860 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528872 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.528914 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529055 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529103 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529129 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529128 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529154 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529278 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529345 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529336 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529375 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529404 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529423 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529443 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529418 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529462 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529485 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529509 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529456 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529528 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529552 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529576 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529596 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529612 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529656 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529622 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529680 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529815 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529821 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529858 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529882 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529943 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529962 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529982 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.529998 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530060 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530079 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530099 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530119 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530131 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530165 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530187 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530208 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530230 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530252 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530261 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530276 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530304 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530360 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530295 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530381 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530406 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530591 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530640 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530608 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530666 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530755 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530794 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530814 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530828 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530829 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530859 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531120 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531155 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531192 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531226 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531252 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531281 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531309 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531336 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531408 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531438 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531466 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531490 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531521 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531547 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531578 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531608 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531645 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531649 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531673 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531643 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.530362 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531706 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531737 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531768 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531794 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531822 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531851 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531886 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531944 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531963 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.531969 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532025 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532046 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532075 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532102 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532129 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532284 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532410 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532404 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532448 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532472 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532493 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532514 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532540 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532563 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532583 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532588 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" 
(OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532632 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532656 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532673 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532678 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532737 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532755 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532753 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532777 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532806 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532835 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532877 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532894 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532922 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532944 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532964 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.532987 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533006 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533024 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533044 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533068 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533093 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533118 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533142 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533195 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533222 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533296 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533333 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533367 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533402 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533429 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533461 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533488 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533514 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533533 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533557 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533580 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534257 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534284 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534312 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534349 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534378 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534422 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534444 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534468 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534489 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534513 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534831 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534876 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534988 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535024 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535045 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535070 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535098 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535128 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535152 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535173 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535206 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535226 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535246 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535352 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535374 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535395 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535416 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535435 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535461 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535480 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535502 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535523 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535720 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535742 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535764 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535784 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535801 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535821 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535841 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535867 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535887 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535926 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535970 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535999 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536025 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536047 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536067 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536087 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536104 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536121 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536137 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536153 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536172 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536190 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536207 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536226 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536309 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536339 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-env-overrides\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536361 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536381 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-openvswitch\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536398 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536422 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536440 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-kubelet\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536459 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-os-release\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536478 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536502 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc2gk\" (UniqueName: \"kubernetes.io/projected/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-kube-api-access-xc2gk\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536682 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfjz\" (UniqueName: \"kubernetes.io/projected/7bd8788d-8022-4502-9181-8d4048712c30-kube-api-access-jdfjz\") pod \"node-resolver-pq9t7\" (UID: \"7bd8788d-8022-4502-9181-8d4048712c30\") " pod="openshift-dns/node-resolver-pq9t7" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536708 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj59p\" (UniqueName: \"kubernetes.io/projected/c84a1f7a-5938-4bec-9ff5-5033db566f4d-kube-api-access-nj59p\") pod \"node-ca-cstpp\" (UID: \"c84a1f7a-5938-4bec-9ff5-5033db566f4d\") " pod="openshift-image-registry/node-ca-cstpp" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536773 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-ovn\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536792 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovn-node-metrics-cert\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536815 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536837 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536868 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fe1deb75-3aeb-4657-a335-fd4c02a2a513-rootfs\") pod \"machine-config-daemon-5kj8v\" (UID: \"fe1deb75-3aeb-4657-a335-fd4c02a2a513\") " pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536891 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fe1deb75-3aeb-4657-a335-fd4c02a2a513-mcd-auth-proxy-config\") pod \"machine-config-daemon-5kj8v\" (UID: \"fe1deb75-3aeb-4657-a335-fd4c02a2a513\") " pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536985 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw98j\" (UniqueName: \"kubernetes.io/projected/fe1deb75-3aeb-4657-a335-fd4c02a2a513-kube-api-access-xw98j\") pod \"machine-config-daemon-5kj8v\" (UID: \"fe1deb75-3aeb-4657-a335-fd4c02a2a513\") " pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537016 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537033 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-systemd-units\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537052 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-cni-bin\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537069 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-cni-netd\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537090 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537186 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537242 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-var-lib-cni-multus\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537262 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-etc-kubernetes\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537316 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjvwb\" (UniqueName: \"kubernetes.io/projected/c724a700-1960-4452-9106-d71685d1b38c-kube-api-access-bjvwb\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537344 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537363 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537382 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c84a1f7a-5938-4bec-9ff5-5033db566f4d-host\") pod \"node-ca-cstpp\" (UID: \"c84a1f7a-5938-4bec-9ff5-5033db566f4d\") " pod="openshift-image-registry/node-ca-cstpp" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537403 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-cnibin\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537418 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-run-k8s-cni-cncf-io\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537437 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-var-lib-kubelet\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537457 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-etc-openvswitch\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537478 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7bd8788d-8022-4502-9181-8d4048712c30-hosts-file\") pod \"node-resolver-pq9t7\" (UID: \"7bd8788d-8022-4502-9181-8d4048712c30\") " pod="openshift-dns/node-resolver-pq9t7" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537497 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe1deb75-3aeb-4657-a335-fd4c02a2a513-proxy-tls\") pod \"machine-config-daemon-5kj8v\" (UID: \"fe1deb75-3aeb-4657-a335-fd4c02a2a513\") " pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537514 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-run-netns\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537534 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-var-lib-cni-bin\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537565 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-slash\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537585 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-log-socket\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537609 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovnkube-config\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537651 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovnkube-script-lib\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537670 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsrrl\" (UniqueName: \"kubernetes.io/projected/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-kube-api-access-lsrrl\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537686 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-os-release\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537702 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-hostroot\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537717 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c724a700-1960-4452-9106-d71685d1b38c-cni-binary-copy\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537733 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-systemd\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537749 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537768 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537789 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537811 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-system-cni-dir\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537829 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-run-multus-certs\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537846 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-var-lib-openvswitch\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537862 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-node-log\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537878 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-multus-cni-dir\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537920 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-multus-socket-dir-parent\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537954 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c724a700-1960-4452-9106-d71685d1b38c-multus-daemon-config\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538020 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-multus-conf-dir\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538040 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-run-ovn-kubernetes\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538056 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-cnibin\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538079 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538096 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c84a1f7a-5938-4bec-9ff5-5033db566f4d-serviceca\") pod \"node-ca-cstpp\" (UID: \"c84a1f7a-5938-4bec-9ff5-5033db566f4d\") " pod="openshift-image-registry/node-ca-cstpp" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538115 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538237 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-system-cni-dir\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538260 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-run-netns\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538282 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538369 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538383 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node 
\"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538396 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538407 4727 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538419 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538432 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538443 4727 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538455 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538466 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538479 4727 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538491 4727 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538504 4727 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538515 4727 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538527 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538546 4727 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538567 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538583 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538597 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538610 4727 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538622 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538632 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538643 4727 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538655 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538667 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.540739 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.540775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.540788 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.540811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.540826 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.550356 4727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.550453 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.559210 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.560164 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.565735 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.638918 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.639899 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.640403 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.640465 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533333 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533379 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533425 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533480 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.533871 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534429 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534560 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534638 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.534968 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535356 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535829 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535952 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535968 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.535980 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536010 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.646678 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536299 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536538 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536687 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.536952 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537027 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537045 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537283 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537313 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537455 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537592 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.537741 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.646781 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.646819 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538481 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.539533 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.539776 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.540206 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.540475 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.540702 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.542646 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.543430 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.544057 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.544374 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.544417 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.544777 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.544988 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.545175 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.545195 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.545295 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.545409 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.545445 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.545845 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.546831 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.547374 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.543776 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.547508 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.547987 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.548058 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.548400 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.548496 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.548773 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.549318 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.549373 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.549414 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.549534 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.549768 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.549880 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.550123 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.550592 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.550624 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.550881 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.550956 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.551233 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.551256 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.551451 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.551627 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.552389 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.552733 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.553131 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.553470 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.553594 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.554056 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.554584 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.554722 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.554786 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.555141 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.555449 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.555533 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.555590 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.555920 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.555965 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.556497 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.556751 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.556836 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.556972 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.557183 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.557262 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.557185 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.557444 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.557941 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.557761 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.557998 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.558712 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.558832 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.558961 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.559667 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.559847 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.560149 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.560238 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.560331 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.560519 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:32:08.060484441 +0000 UTC m=+32.255259143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.562840 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.569183 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.573443 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.577073 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.578510 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.647350 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.647369 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.580965 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.581033 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.581318 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.595988 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.647363 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.638379 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.639034 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.639168 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.639373 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.639549 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.639586 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.639725 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.647542 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.647563 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.639725 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.641040 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.641143 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.641188 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.641636 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.641748 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.642021 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.642371 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.642670 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.642696 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.643443 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.643809 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.643140 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.644087 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.643975 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.644169 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.644461 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.644615 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.645197 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.645261 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.645232 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.645501 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.645851 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.646244 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.646561 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.646973 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.647113 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.538317 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.648018 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.648478 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.648664 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.649570 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.649710 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.649749 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.650099 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.650378 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.650385 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.650631 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.650885 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.650973 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.651222 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.651531 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.651746 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.652487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.652533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.652548 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.652577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.652592 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.655089 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.656144 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.657388 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.657730 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.659756 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:08.159710798 +0000 UTC m=+32.354485340 (durationBeforeRetry 500ms). 
Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.660034 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:08.160000624 +0000 UTC m=+32.354775166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.660226 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:08.160201419 +0000 UTC m=+32.354976031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.660576 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:32:07 crc kubenswrapper[4727]: E1210 14:32:07.660479 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:08.160463244 +0000 UTC m=+32.355237786 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
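Note the deadline format in these entries: 2025-12-10 14:32:08.160463244 +0000 UTC m=+32.355237786. The m=+ suffix is Go's monotonic clock reading, which time.Time carries alongside the wall clock; deadlines and durations computed from it are immune to wall-clock jumps. A small demonstration of where that suffix comes from:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now() // carries a monotonic reading
	time.Sleep(100 * time.Millisecond)

	later := time.Now()
	// Printing a time.Time that still holds a monotonic reading appends
	// "m=+<seconds>", exactly the suffix seen in the kubelet's deadlines.
	fmt.Println(later)

	// Subtracting two such times uses the monotonic clock, so the result
	// is correct even if the wall clock was stepped in between.
	fmt.Println(later.Sub(start))

	// Round(0) strips the monotonic reading; the "m=" suffix disappears.
	fmt.Println(later.Round(0))
}
```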
Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.662042 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.662174 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665465 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665525 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665556 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.665558 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~secret/v4-0-config-system-ocp-branding-template
Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665590 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665597 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665633 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665676 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665742 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665775 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.665781 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~secret/image-registry-operator-tls Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665802 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665801 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665831 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665855 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665881 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.665919 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665935 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665939 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.665978 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.666786 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.665981 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~projected/kube-api-access-mg5zb Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.666846 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.666057 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~secret/console-oauth-config Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.666928 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.666019 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes/kubernetes.io~configmap/ovnkube-config Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.666955 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.666108 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes/kubernetes.io~projected/kube-api-access-x2m85 Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.666987 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667024 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.666989 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667121 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes/kubernetes.io~projected/kube-api-access-xcgwh Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667190 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667219 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667215 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667246 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667315 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667244 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667353 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667343 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667378 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~projected/kube-api-access-9xfj7 Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667391 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667394 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.666168 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~projected/kube-api-access-rnphk Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667440 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667449 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667465 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667469 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~projected/kube-api-access-2d4wz Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667270 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667487 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667513 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667527 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667208 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667537 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.666193 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~projected/kube-api-access-w7l8j Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667569 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.666401 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes/kubernetes.io~projected/kube-api-access-x7zkh Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667597 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.666474 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667613 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667617 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~projected/bound-sa-token Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.666577 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes/kubernetes.io~secret/certs Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.666595 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667319 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes/kubernetes.io~configmap/env-overrides Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667644 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667656 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667669 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667330 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667536 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667705 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667404 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667394 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667742 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667799 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667470 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667823 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~secret/webhook-cert Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667830 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667838 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667622 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes/kubernetes.io~secret/srv-cert Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667857 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667622 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~secret/serving-cert Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667884 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667680 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667843 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667899 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667930 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.667944 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~projected/kube-api-access-wxkg8 Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667950 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667964 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667977 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.667998 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.668057 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~projected/kube-api-access-zkvpv Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668073 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668062 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.668097 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~projected/kube-api-access-qs4fp Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668108 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668137 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668158 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.668169 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~projected/kube-api-access-qg5z5 Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668177 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668180 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668206 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668224 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.668228 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes/kubernetes.io~projected/kube-api-access-fcqwp Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668236 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668246 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668271 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.668283 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~projected/kube-api-access-x4zgh Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668318 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668339 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668360 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.668290 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.669030 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.669039 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.669087 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~projected/kube-api-access-zgdk5 Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.669094 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.669139 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes/kubernetes.io~projected/kube-api-access-w9rds Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.669145 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.669186 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes/kubernetes.io~projected/kube-api-access-2w9zh Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.669192 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.669235 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.669241 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.669282 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.669290 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.669331 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~projected/kube-api-access-w4xd4 Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.669338 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.669382 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.669388 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.669598 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes/kubernetes.io~configmap/cni-binary-copy Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.669637 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.671843 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.671883 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.671953 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.671988 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672009 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672035 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672055 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672129 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672153 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672175 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672198 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672240 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672261 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672289 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672310 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672330 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672361 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672336 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes/kubernetes.io~secret/webhook-certs Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672406 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672412 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672457 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~secret/v4-0-config-user-template-login Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672478 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes/kubernetes.io~secret/proxy-tls Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672533 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~secret/stats-auth Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672458 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~secret/serving-cert Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672574 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~secret/encryption-config Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672587 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672484 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672573 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672569 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672665 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~secret/etcd-client Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672718 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672729 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~secret/proxy-tls Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672732 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes/kubernetes.io~projected/kube-api-access-pjr6v Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672537 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672746 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672759 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672739 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~projected/kube-api-access-kfwg7 Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672332 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672791 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672868 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672995 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673043 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672338 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes/kubernetes.io~secret/node-bootstrap-token Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673065 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673073 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673101 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673130 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673152 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673175 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673203 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673221 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673243 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672796 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672818 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~secret/v4-0-config-user-template-error Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673312 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.672384 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes/kubernetes.io~projected/kube-api-access-4d4hj Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673339 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673160 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes/kubernetes.io~secret/samples-operator-tls Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673374 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673382 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673233 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~secret/v4-0-config-system-serving-cert Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673426 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673290 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~projected/kube-api-access-xcphl Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673449 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673442 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes/kubernetes.io~secret/metrics-certs Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673484 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~projected/kube-api-access-8tdtz Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673500 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673497 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~secret/metrics-certs Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673518 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes/kubernetes.io~projected/kube-api-access-nzwt7 Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673550 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673533 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673519 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673574 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673589 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673401 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673645 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673677 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~projected/kube-api-access-d4lsv Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673684 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673690 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673697 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673675 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673729 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~secret/serving-cert Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673729 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~secret/serving-cert Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673746 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673742 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673754 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673792 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673798 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673780 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673817 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673750 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673848 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~secret/v4-0-config-system-session Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673858 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673859 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673888 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673891 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673919 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.673936 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~secret/v4-0-config-system-router-certs Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673941 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673948 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673966 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.673992 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674008 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes/kubernetes.io~secret/metrics-tls Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674017 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674020 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674045 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674069 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes/kubernetes.io~configmap/mcd-auth-proxy-config Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674082 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674072 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674122 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674128 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~projected/kube-api-access-d6qdx Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674143 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674168 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674194 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 
14:32:07.674216 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674239 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674317 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674339 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674358 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674376 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674397 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674419 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674436 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674475 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674496 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674516 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674541 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674584 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" 
(UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674606 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674627 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674695 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674772 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674145 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674185 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~secret/serving-cert Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674892 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674893 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674929 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674944 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~secret/machine-api-operator-tls Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674942 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674960 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674980 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.674995 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.675027 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675041 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.675046 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.675062 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~projected/kube-api-access-lzf88 Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.675088 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675102 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675058 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674262 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~secret/proxy-tls Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.675138 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~secret/metrics-tls Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675146 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.672454 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675156 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675213 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675152 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.675224 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~secret/serving-cert Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675239 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674673 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675243 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675265 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674725 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~secret/serving-cert Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.675281 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes/kubernetes.io~projected/kube-api-access-dbsvg Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675298 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674735 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.675315 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes/kubernetes.io~projected/kube-api-access-jhbk2 Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675318 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675303 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675331 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675427 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675990 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676031 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676033 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676067 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676106 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675318 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676140 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676179 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674786 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~secret/serving-cert Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.676270 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes/kubernetes.io~secret/profile-collector-cert Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676286 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676272 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674816 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.676314 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676323 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674855 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes/kubernetes.io~projected/kube-api-access-vt5rc Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676347 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676362 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-kubelet\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676330 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674851 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~projected/kube-api-access Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676407 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674854 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~projected/kube-api-access Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676424 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676473 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-openvswitch\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676583 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-openvswitch\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676606 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-kubelet\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676650 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674858 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~secret/serving-cert Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676682 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.675065 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~projected/kube-api-access-279lb Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676706 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674221 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676721 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.675175 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~secret/serving-cert Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676739 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.675213 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~secret/serving-cert Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676758 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674776 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676774 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.675365 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676788 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.675374 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676871 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-os-release\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676945 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676876 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.675434 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.675504 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~secret/serving-cert Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.675598 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.676128 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.676994 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.676200 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.677008 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-ovn\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.677090 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.677098 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-ovn\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.677011 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.676250 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.677374 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.674787 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.677400 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.676928 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.677417 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.677036 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.677007 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.678078 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc2gk\" (UniqueName: \"kubernetes.io/projected/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-kube-api-access-xc2gk\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.678222 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.678273 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdfjz\" (UniqueName: \"kubernetes.io/projected/7bd8788d-8022-4502-9181-8d4048712c30-kube-api-access-jdfjz\") pod \"node-resolver-pq9t7\" (UID: \"7bd8788d-8022-4502-9181-8d4048712c30\") " pod="openshift-dns/node-resolver-pq9t7" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.678360 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-os-release\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.678417 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj59p\" (UniqueName: \"kubernetes.io/projected/c84a1f7a-5938-4bec-9ff5-5033db566f4d-kube-api-access-nj59p\") pod \"node-ca-cstpp\" (UID: \"c84a1f7a-5938-4bec-9ff5-5033db566f4d\") 
" pod="openshift-image-registry/node-ca-cstpp" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.678567 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw98j\" (UniqueName: \"kubernetes.io/projected/fe1deb75-3aeb-4657-a335-fd4c02a2a513-kube-api-access-xw98j\") pod \"machine-config-daemon-5kj8v\" (UID: \"fe1deb75-3aeb-4657-a335-fd4c02a2a513\") " pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.678600 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovn-node-metrics-cert\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.678772 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fe1deb75-3aeb-4657-a335-fd4c02a2a513-rootfs\") pod \"machine-config-daemon-5kj8v\" (UID: \"fe1deb75-3aeb-4657-a335-fd4c02a2a513\") " pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.678802 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.678833 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fe1deb75-3aeb-4657-a335-fd4c02a2a513-rootfs\") pod \"machine-config-daemon-5kj8v\" (UID: \"fe1deb75-3aeb-4657-a335-fd4c02a2a513\") " pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.678816 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fe1deb75-3aeb-4657-a335-fd4c02a2a513-mcd-auth-proxy-config\") pod \"machine-config-daemon-5kj8v\" (UID: \"fe1deb75-3aeb-4657-a335-fd4c02a2a513\") " pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.678976 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-systemd-units\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.679007 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-cni-bin\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.679029 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-cni-netd\") pod 
\"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.679065 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.679090 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c84a1f7a-5938-4bec-9ff5-5033db566f4d-host\") pod \"node-ca-cstpp\" (UID: \"c84a1f7a-5938-4bec-9ff5-5033db566f4d\") " pod="openshift-image-registry/node-ca-cstpp" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.679115 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-cni-bin\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.679124 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-var-lib-cni-multus\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.679180 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-cni-netd\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.679220 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.679247 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c84a1f7a-5938-4bec-9ff5-5033db566f4d-host\") pod \"node-ca-cstpp\" (UID: \"c84a1f7a-5938-4bec-9ff5-5033db566f4d\") " pod="openshift-image-registry/node-ca-cstpp" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.679271 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-var-lib-cni-multus\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.679296 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-systemd-units\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.680676 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-etc-kubernetes\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681453 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-etc-kubernetes\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681486 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fe1deb75-3aeb-4657-a335-fd4c02a2a513-mcd-auth-proxy-config\") pod \"machine-config-daemon-5kj8v\" (UID: \"fe1deb75-3aeb-4657-a335-fd4c02a2a513\") " pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681499 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjvwb\" (UniqueName: \"kubernetes.io/projected/c724a700-1960-4452-9106-d71685d1b38c-kube-api-access-bjvwb\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681553 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681613 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-var-lib-kubelet\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681645 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-etc-openvswitch\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681679 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-cnibin\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681706 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-run-k8s-cni-cncf-io\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681731 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fe1deb75-3aeb-4657-a335-fd4c02a2a513-proxy-tls\") pod \"machine-config-daemon-5kj8v\" (UID: \"fe1deb75-3aeb-4657-a335-fd4c02a2a513\") " pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681762 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7bd8788d-8022-4502-9181-8d4048712c30-hosts-file\") pod \"node-resolver-pq9t7\" (UID: \"7bd8788d-8022-4502-9181-8d4048712c30\") " pod="openshift-dns/node-resolver-pq9t7" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681787 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-run-netns\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681810 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-var-lib-cni-bin\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681836 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-slash\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681858 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-log-socket\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681881 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-hostroot\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681927 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovnkube-config\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.681963 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovnkube-script-lib\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682011 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsrrl\" (UniqueName: \"kubernetes.io/projected/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-kube-api-access-lsrrl\") pod \"ovnkube-node-k8b7p\" (UID: 
\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682054 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-os-release\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682082 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c724a700-1960-4452-9106-d71685d1b38c-cni-binary-copy\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682109 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-systemd\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682135 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682164 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682212 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-system-cni-dir\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682252 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-run-multus-certs\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682307 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-log-socket\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682353 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-var-lib-kubelet\") pod \"multus-6ph7v\" (UID: 
\"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682355 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-var-lib-openvswitch\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682383 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-etc-openvswitch\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682426 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-cnibin\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682451 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-run-k8s-cni-cncf-io\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682467 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-node-log\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682505 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-multus-cni-dir\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682536 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-multus-socket-dir-parent\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682563 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c724a700-1960-4452-9106-d71685d1b38c-multus-daemon-config\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682595 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-multus-conf-dir\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682813 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-run-ovn-kubernetes\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682879 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-cnibin\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683019 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683041 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c84a1f7a-5938-4bec-9ff5-5033db566f4d-serviceca\") pod \"node-ca-cstpp\" (UID: \"c84a1f7a-5938-4bec-9ff5-5033db566f4d\") " pod="openshift-image-registry/node-ca-cstpp" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683077 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-hostroot\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683086 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-system-cni-dir\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683124 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-run-netns\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683155 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-env-overrides\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683343 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7bd8788d-8022-4502-9181-8d4048712c30-hosts-file\") pod \"node-resolver-pq9t7\" (UID: \"7bd8788d-8022-4502-9181-8d4048712c30\") " pod="openshift-dns/node-resolver-pq9t7" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683353 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 
10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683377 4727 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683385 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-run-netns\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683397 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683415 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683420 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-var-lib-cni-bin\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683430 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683446 4727 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683450 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-slash\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683460 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683477 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683505 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-os-release\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683492 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683551 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683566 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683579 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683592 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683597 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovnkube-config\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683605 4727 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683645 4727 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683686 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683698 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683711 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683724 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683741 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683752 4727 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683765 4727 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683777 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683787 4727 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683797 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683808 4727 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683818 4727 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683828 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683839 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683849 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683863 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683874 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683884 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683895 4727 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.683923 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684024 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684062 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovnkube-script-lib\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684091 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-system-cni-dir\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684143 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-multus-conf-dir\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684179 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-run-ovn-kubernetes\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684207 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-cnibin\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.682374 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684533 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-env-overrides\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 
14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684586 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-host-run-multus-certs\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684631 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-var-lib-openvswitch\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684639 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-system-cni-dir\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684699 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-run-netns\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684763 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-node-log\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684780 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c724a700-1960-4452-9106-d71685d1b38c-cni-binary-copy\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684836 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-systemd\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684862 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684875 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684886 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684917 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684930 4727 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684944 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684957 4727 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684967 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684978 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.684990 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685001 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685035 4727 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685045 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685055 4727 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685053 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-multus-cni-dir\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685068 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 
14:32:07.685095 4727 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685117 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685130 4727 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685143 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685157 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685174 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685186 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685198 4727 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685211 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685226 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685238 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685251 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685264 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 
14:32:07.685277 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685288 4727 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685300 4727 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685320 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685331 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685342 4727 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685353 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685367 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685380 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685395 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685409 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685421 4727 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685435 4727 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc 
kubenswrapper[4727]: I1210 14:32:07.685448 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685460 4727 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685494 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685507 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685504 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c724a700-1960-4452-9106-d71685d1b38c-multus-daemon-config\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685521 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685534 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685593 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685612 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685614 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c724a700-1960-4452-9106-d71685d1b38c-multus-socket-dir-parent\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685607 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c84a1f7a-5938-4bec-9ff5-5033db566f4d-serviceca\") pod \"node-ca-cstpp\" (UID: \"c84a1f7a-5938-4bec-9ff5-5033db566f4d\") " pod="openshift-image-registry/node-ca-cstpp" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685626 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: 
I1210 14:32:07.685747 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685769 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685784 4727 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685797 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685838 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685854 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685867 4727 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685881 4727 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685938 4727 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685948 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685958 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685969 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.685980 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686017 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686027 4727 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686037 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686049 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686059 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686093 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686112 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686131 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686183 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686203 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686214 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686224 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686331 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686387 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686932 4727 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.686947 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687013 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687024 4727 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687035 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687047 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687152 4727 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687196 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687220 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687254 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687287 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687297 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687310 4727 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687324 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687362 4727 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687233 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687374 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687880 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe1deb75-3aeb-4657-a335-fd4c02a2a513-proxy-tls\") pod \"machine-config-daemon-5kj8v\" (UID: \"fe1deb75-3aeb-4657-a335-fd4c02a2a513\") " pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.687956 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696062 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696082 4727 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696097 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696107 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696117 4727 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696128 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696139 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696153 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696163 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696172 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696182 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696192 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696203 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696213 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696223 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696232 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696245 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696255 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696266 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696275 4727 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696287 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696298 4727 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696307 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696316 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696327 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696337 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696345 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696354 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696368 4727 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696380 4727 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696390 4727 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696400 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696410 4727 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696419 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696428 4727 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696437 4727 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696446 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696455 4727 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696464 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696474 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696485 4727 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.696495 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.698870 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovn-node-metrics-cert\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.700574 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsrrl\" (UniqueName: \"kubernetes.io/projected/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-kube-api-access-lsrrl\") pod \"ovnkube-node-k8b7p\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.701083 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdfjz\" (UniqueName: \"kubernetes.io/projected/7bd8788d-8022-4502-9181-8d4048712c30-kube-api-access-jdfjz\") pod \"node-resolver-pq9t7\" (UID: \"7bd8788d-8022-4502-9181-8d4048712c30\") " pod="openshift-dns/node-resolver-pq9t7" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.701456 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj59p\" (UniqueName: \"kubernetes.io/projected/c84a1f7a-5938-4bec-9ff5-5033db566f4d-kube-api-access-nj59p\") pod \"node-ca-cstpp\" (UID: \"c84a1f7a-5938-4bec-9ff5-5033db566f4d\") " pod="openshift-image-registry/node-ca-cstpp" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.702364 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjvwb\" (UniqueName: \"kubernetes.io/projected/c724a700-1960-4452-9106-d71685d1b38c-kube-api-access-bjvwb\") pod \"multus-6ph7v\" (UID: \"c724a700-1960-4452-9106-d71685d1b38c\") " pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.703173 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.705352 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw98j\" (UniqueName: \"kubernetes.io/projected/fe1deb75-3aeb-4657-a335-fd4c02a2a513-kube-api-access-xw98j\") pod \"machine-config-daemon-5kj8v\" (UID: \"fe1deb75-3aeb-4657-a335-fd4c02a2a513\") " pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.705575 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-pq9t7" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.705735 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc2gk\" (UniqueName: \"kubernetes.io/projected/8e83cbea-272d-4fcf-a39b-f2a60adbfb9d-kube-api-access-xc2gk\") pod \"multus-additional-cni-plugins-nn8cx\" (UID: \"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\") " pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.713335 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-6ph7v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.713922 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.723120 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.727144 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.733757 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc724a700_1960_4452_9106_d71685d1b38c.slice/crio-fc845e693509bea0f65c6e0ab0b95c7b152a52cf4b5ec9753b280a11f21cd8e8 WatchSource:0}: Error finding container fc845e693509bea0f65c6e0ab0b95c7b152a52cf4b5ec9753b280a11f21cd8e8: Status 404 returned error can't find the container with id fc845e693509bea0f65c6e0ab0b95c7b152a52cf4b5ec9753b280a11f21cd8e8 Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.736382 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.743962 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.756630 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.763260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.763319 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.763333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.763355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.763368 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.767558 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.778931 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.789781 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe1deb75_3aeb_4657_a335_fd4c02a2a513.slice/crio-75d59a40754f79b996d66576f23a2ea0c4fede8ce292a1911036275ff21709d9 WatchSource:0}: Error finding container 75d59a40754f79b996d66576f23a2ea0c4fede8ce292a1911036275ff21709d9: Status 404 returned error can't find the container with id 75d59a40754f79b996d66576f23a2ea0c4fede8ce292a1911036275ff21709d9 Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.822870 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.839568 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:07 crc kubenswrapper[4727]: W1210 14:32:07.846390 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-0c6139c2374f5dffd1eeb07447b9c06ba8f96be8b0618cd43bb8bf8d2a0b8b26 WatchSource:0}: Error finding container 0c6139c2374f5dffd1eeb07447b9c06ba8f96be8b0618cd43bb8bf8d2a0b8b26: Status 404 returned error can't find the container with id 0c6139c2374f5dffd1eeb07447b9c06ba8f96be8b0618cd43bb8bf8d2a0b8b26 Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.856389 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cstpp" Dec 10 14:32:07 crc kubenswrapper[4727]: I1210 14:32:07.963437 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.100384 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:08 crc kubenswrapper[4727]: E1210 14:32:08.100634 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:32:09.100604609 +0000 UTC m=+33.295379161 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.202230 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.202281 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.202309 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.202333 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:08 crc kubenswrapper[4727]: E1210 14:32:08.202447 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:08 crc kubenswrapper[4727]: E1210 14:32:08.202514 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:09.202496955 +0000 UTC m=+33.397271497 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:08 crc kubenswrapper[4727]: E1210 14:32:08.202569 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:08 crc kubenswrapper[4727]: E1210 14:32:08.202725 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:09.202699519 +0000 UTC m=+33.397474061 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:08 crc kubenswrapper[4727]: E1210 14:32:08.202724 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:08 crc kubenswrapper[4727]: E1210 14:32:08.202788 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:08 crc kubenswrapper[4727]: E1210 14:32:08.202812 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:08 crc kubenswrapper[4727]: E1210 14:32:08.202627 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:08 crc kubenswrapper[4727]: E1210 14:32:08.202927 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:08 crc kubenswrapper[4727]: E1210 14:32:08.202944 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:08 crc kubenswrapper[4727]: E1210 14:32:08.202958 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:09.202877313 +0000 UTC m=+33.397651885 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:08 crc kubenswrapper[4727]: E1210 14:32:08.202995 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:09.202972285 +0000 UTC m=+33.397746827 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.686732 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.686773 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.686781 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.686797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.686807 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:08Z","lastTransitionTime":"2025-12-10T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.700823 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:08 crc kubenswrapper[4727]: E1210 14:32:08.700970 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.731392 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.732181 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.734277 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.735387 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.737269 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.738621 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.740260 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.741940 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.742801 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.744298 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.744985 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.746485 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.748144 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.749425 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.751376 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.752633 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.753859 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.755384 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.756449 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.757992 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: W1210 14:32:08.758114 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc84a1f7a_5938_4bec_9ff5_5033db566f4d.slice/crio-18d58215b980ae99013673704b32edc9669e79cc0ae8d6d79f12d9a01cf50c5b WatchSource:0}: Error finding container 18d58215b980ae99013673704b32edc9669e79cc0ae8d6d79f12d9a01cf50c5b: Status 404 returned error can't find the container with id 18d58215b980ae99013673704b32edc9669e79cc0ae8d6d79f12d9a01cf50c5b Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.758663 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: W1210 14:32:08.759670 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-c6991f484f4f4c54a8eeafa3a6e0131a0f9d58935e9e3592089108fa54d67ca9 WatchSource:0}: Error finding container c6991f484f4f4c54a8eeafa3a6e0131a0f9d58935e9e3592089108fa54d67ca9: Status 404 returned error can't find the container with id c6991f484f4f4c54a8eeafa3a6e0131a0f9d58935e9e3592089108fa54d67ca9 Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.760639 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.761521 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.764277 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.765290 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.766838 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.767816 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.768943 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.769576 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.770558 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.771269 4727 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.771435 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.774717 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.775341 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.775764 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.778044 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.779206 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.779731 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.780883 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.781718 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.782887 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.783779 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.785563 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.786293 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.787335 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.787881 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.788755 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.789643 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.790802 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.791598 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.792114 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.793042 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.793691 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.794765 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.795539 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pq9t7" event={"ID":"7bd8788d-8022-4502-9181-8d4048712c30","Type":"ContainerStarted","Data":"c096ee06bdcc6ac2b0478e55b99b0c50ecf135cddc7ca9c958edf44634ab1e16"} Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.795589 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3976023fa86592ae8a1d36760d494e429f77bc1c0cba716d192fbc87b17184a2"} Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.795634 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6ph7v" event={"ID":"c724a700-1960-4452-9106-d71685d1b38c","Type":"ContainerStarted","Data":"fc845e693509bea0f65c6e0ab0b95c7b152a52cf4b5ec9753b280a11f21cd8e8"} Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.795649 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0c6139c2374f5dffd1eeb07447b9c06ba8f96be8b0618cd43bb8bf8d2a0b8b26"} Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.795663 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"75d59a40754f79b996d66576f23a2ea0c4fede8ce292a1911036275ff21709d9"} Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.795675 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" event={"ID":"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d","Type":"ContainerStarted","Data":"2e37eed9bb5c9bcc7fcc04d9f7901ff796d8997574278a6ce66067a549542848"} Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.799740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.799792 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.799805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.799830 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.799843 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:08Z","lastTransitionTime":"2025-12-10T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.903123 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.903170 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.903181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.903213 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:08 crc kubenswrapper[4727]: I1210 14:32:08.903225 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:08Z","lastTransitionTime":"2025-12-10T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.007086 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.007141 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.007153 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.007173 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.007188 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.110630 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.111407 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.111455 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.111487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.111503 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.134114 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:09 crc kubenswrapper[4727]: E1210 14:32:09.134379 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:32:11.134335455 +0000 UTC m=+35.329110017 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.215305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.216144 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.216178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.216204 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.216217 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.237180 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.237331 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.237417 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:09 crc kubenswrapper[4727]: E1210 14:32:09.237621 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:09 crc kubenswrapper[4727]: E1210 14:32:09.237669 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:09 crc kubenswrapper[4727]: E1210 14:32:09.237693 4727 projected.go:194] Error preparing data for projected volume 
kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:09 crc kubenswrapper[4727]: E1210 14:32:09.237765 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:11.237733344 +0000 UTC m=+35.432507886 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:09 crc kubenswrapper[4727]: E1210 14:32:09.238013 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:09 crc kubenswrapper[4727]: E1210 14:32:09.238212 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:11.238164173 +0000 UTC m=+35.432938715 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.237483 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:09 crc kubenswrapper[4727]: E1210 14:32:09.238363 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:09 crc kubenswrapper[4727]: E1210 14:32:09.238421 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:11.238411029 +0000 UTC m=+35.433185571 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:09 crc kubenswrapper[4727]: E1210 14:32:09.238555 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:09 crc kubenswrapper[4727]: E1210 14:32:09.238601 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:09 crc kubenswrapper[4727]: E1210 14:32:09.238627 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:09 crc kubenswrapper[4727]: E1210 14:32:09.238702 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:11.238677385 +0000 UTC m=+35.433451927 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.319122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.319194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.319207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.319226 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.319258 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.422210 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.422269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.422279 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.422318 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.422337 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.526260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.526325 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.526344 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.526364 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.526381 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.562060 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.562102 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:09 crc kubenswrapper[4727]: E1210 14:32:09.562244 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:09 crc kubenswrapper[4727]: E1210 14:32:09.562433 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.629278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.629323 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.629342 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.629364 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.629377 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.756699 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.756746 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.756756 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.756778 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.756793 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.758369 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6ph7v" event={"ID":"c724a700-1960-4452-9106-d71685d1b38c","Type":"ContainerStarted","Data":"9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.760697 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pq9t7" event={"ID":"7bd8788d-8022-4502-9181-8d4048712c30","Type":"ContainerStarted","Data":"511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.762298 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" event={"ID":"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d","Type":"ContainerStarted","Data":"2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.764304 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cstpp" event={"ID":"c84a1f7a-5938-4bec-9ff5-5033db566f4d","Type":"ContainerStarted","Data":"0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.764384 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cstpp" event={"ID":"c84a1f7a-5938-4bec-9ff5-5033db566f4d","Type":"ContainerStarted","Data":"18d58215b980ae99013673704b32edc9669e79cc0ae8d6d79f12d9a01cf50c5b"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.766078 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.766112 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c6991f484f4f4c54a8eeafa3a6e0131a0f9d58935e9e3592089108fa54d67ca9"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.767753 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerStarted","Data":"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.767800 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerStarted","Data":"06e2c5a0c013037c04a2af7b1fedb0309e98cc33b0a01350ef0b081de19dfba7"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.769589 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.771083 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" 
event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.860218 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.860287 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.860300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.860325 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.860337 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.981738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.981822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.981838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.981866 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4727]: I1210 14:32:09.981878 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.101775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.101819 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.101829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.101849 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.101862 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.206050 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.206081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.206090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.206104 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.206114 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.308462 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.308494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.308504 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.308519 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.308531 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.410971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.411014 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.411035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.411056 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.411067 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.513810 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.513868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.513880 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.513930 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.513943 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.563054 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:10 crc kubenswrapper[4727]: E1210 14:32:10.563346 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.627841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.627897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.627941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.627963 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.627974 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.730582 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.730623 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.730634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.730653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.730665 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.784512 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4"} Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.786573 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerID="43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706" exitCode=0 Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.786692 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerDied","Data":"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706"} Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.790144 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f"} Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.835010 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.835064 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.835073 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.835090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.835101 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.847329 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab
5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.858485 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.872043 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.883620 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.900939 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.918591 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.934607 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.937745 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.937805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.937816 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.937836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.937849 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.949701 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.969797 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.979622 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:10 crc kubenswrapper[4727]: I1210 14:32:10.992365 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.002261 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.016548 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.
126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.029054 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.039979 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.041172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.041206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.041216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.041233 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.041247 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.056209 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.066611 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.192540 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.192574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.192583 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.192600 4727 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.192610 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.192703 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.213122 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.229002 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.231343 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:32:11 crc kubenswrapper[4727]: E1210 14:32:11.231639 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:32:15.231608296 +0000 UTC m=+39.426382838 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.246192 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.287186 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.297180 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.297286 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.297341 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.297378 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.297433 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.303376 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.321247 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.333062 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 10 14:32:11 crc kubenswrapper[4727]: E1210 14:32:11.333295 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 10 14:32:11 crc kubenswrapper[4727]: E1210 14:32:11.333417 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:15.333392729 +0000 UTC m=+39.528167271 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.333523 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.333602 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.333652 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 10 14:32:11 crc kubenswrapper[4727]: E1210 14:32:11.333729 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 10 14:32:11 crc kubenswrapper[4727]: E1210 14:32:11.333779 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:15.333764907 +0000 UTC m=+39.528539449 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 10 14:32:11 crc kubenswrapper[4727]: E1210 14:32:11.333973 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 10 14:32:11 crc kubenswrapper[4727]: E1210 14:32:11.333995 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 10 14:32:11 crc kubenswrapper[4727]: E1210 14:32:11.334257 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 10 14:32:11 crc kubenswrapper[4727]: E1210 14:32:11.334281 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 10 14:32:11 crc kubenswrapper[4727]: E1210 14:32:11.334291 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 10 14:32:11 crc kubenswrapper[4727]: E1210 14:32:11.334344 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:15.33432888 +0000 UTC m=+39.529103422 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 10 14:32:11 crc kubenswrapper[4727]: E1210 14:32:11.334373 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 10 14:32:11 crc kubenswrapper[4727]: E1210 14:32:11.334408 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:15.334395861 +0000 UTC m=+39.529170403 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.401165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.401224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.401234 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.401251 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.401264 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.504045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.504094 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.504106 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.504123 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.504136 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.563137 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.563154 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 10 14:32:11 crc kubenswrapper[4727]: E1210 14:32:11.563342 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 10 14:32:11 crc kubenswrapper[4727]: E1210 14:32:11.563625 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.609658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.609738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.609763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.609798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.609824 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.756947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.757032 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.757077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.757102 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.757118 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.804851 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7"}
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.815446 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.833706 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.863351 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.863406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.863418 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.863436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.863450 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.873422 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.899946 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.908027 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.935783 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 14:32:11 crc kubenswrapper[4727]: I1210 14:32:11.961431 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.153189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.153232 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.153244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.153265 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.153277 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:12Z","lastTransitionTime":"2025-12-10T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.221026 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.251268 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:
07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.256085 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.256195 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.256210 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.256233 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.256248 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:12Z","lastTransitionTime":"2025-12-10T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.266137 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.282721 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.302640 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":
\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.335478 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.419484 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.419549 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.419564 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.419590 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.419609 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:12Z","lastTransitionTime":"2025-12-10T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.425251 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.440593 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.456805 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.478118 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.597415 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:12 crc kubenswrapper[4727]: E1210 14:32:12.597603 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.609673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.609723 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.609733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.609754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.609771 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:12Z","lastTransitionTime":"2025-12-10T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.681016 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.830806 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.830894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.830965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.831018 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.831055 4727 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:12Z","lastTransitionTime":"2025-12-10T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.835002 4727 generic.go:334] "Generic (PLEG): container finished" podID="8e83cbea-272d-4fcf-a39b-f2a60adbfb9d" containerID="2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70" exitCode=0 Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.835145 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" event={"ID":"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d","Type":"ContainerDied","Data":"2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70"} Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.837879 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.838967 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerStarted","Data":"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8"} Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.838999 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerStarted","Data":"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc"} Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.853624 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.886229 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.912973 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.944867 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4727]: I1210 14:32:12.981575 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.000034 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.015435 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:13Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.021019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.021088 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.021101 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.021127 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.021141 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.040080 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:13Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.072853 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:13Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.088678 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:13Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.112943 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:13Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.124529 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.124571 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.124579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.124596 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.124610 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.133715 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:13Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.146665 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:13Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.158619 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:13Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.170533 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:13Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.194503 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:13Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.212293 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:13Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.235011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.235063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.235077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.235100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.235116 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.337844 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.337877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.337886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.337919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.337930 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.440431 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.440476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.440488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.440507 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.440521 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.543486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.543540 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.543553 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.543572 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.543586 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
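Every "Failed to update status for pod" entry above fails the same way: the API server must pass each status patch through the pod.network-node-identity.openshift.io mutating webhook, and the TLS handshake with the webhook endpoint at https://127.0.0.1:9743 is rejected because its serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-10T14:32:13Z. This pattern is typical of a CRC virtual machine resumed long after its certificates were issued; no pod status can be patched until the certificate is rotated. A minimal Go sketch of the comparison the verifier performs, assuming a hypothetical on-disk path for the webhook's serving certificate (the path is illustrative, not taken from this log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical location of the webhook serving certificate; adjust
        // to wherever the cert actually lives on the node being inspected.
        data, err := os.ReadFile("/path/to/network-node-identity/serving.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // The kubelet error reports exactly this comparison: current time
        // after NotAfter means x509 verification fails before any request.
        fmt.Printf("NotBefore=%s NotAfter=%s expired=%v\n",
            cert.NotBefore.Format(time.RFC3339),
            cert.NotAfter.Format(time.RFC3339),
            time.Now().After(cert.NotAfter))
    }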
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.563025 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.563098 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 10 14:32:13 crc kubenswrapper[4727]: E1210 14:32:13.563306 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 10 14:32:13 crc kubenswrapper[4727]: E1210 14:32:13.567210 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.646701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.646740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.646749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.646766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.646776 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
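The NodeNotReady churn is the second consequence of the same outage: kubelet marks the node Ready only after the container runtime reports NetworkReady, and the runtime keeps answering that no CNI network is configured because /etc/kubernetes/cni/net.d/ is empty. On this cluster that config is expected to be written by the OVN-Kubernetes node pod once it is running, which it cannot finish doing while the webhook above is broken, so the "Recording event message" and "Node became not ready" groups repeat on every status sync (about every 100 ms here). A rough stand-in for the runtime's directory check, with the candidate extension list an assumption rather than something confirmed by this log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named in the kubelet error message.
        confDir := "/etc/kubernetes/cni/net.d"
        var found []string
        // Assumed candidate extensions for CNI network configs.
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, _ := filepath.Glob(filepath.Join(confDir, pat))
            found = append(found, matches...)
        }
        if len(found) == 0 {
            fmt.Println("no CNI configuration file in", confDir)
            os.Exit(1) // the node stays NotReady in this state
        }
        for _, f := range found {
            fmt.Println("found CNI config:", f)
        }
    }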
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.769618 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.769680 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.769722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.769750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.769768 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.873161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.873206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.873218 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.873238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.873257 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.977543 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.977594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.977604 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.977621 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:13 crc kubenswrapper[4727]: I1210 14:32:13.977633 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.081205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.081377 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.081405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.081438 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.081462 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.202423 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.202485 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.202504 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.202531 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.202551 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.306345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.306435 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.306453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.306488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.306534 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.409625 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.409702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.409722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.409753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.409773 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.512836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.512935 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.512954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.512987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.513008 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.562702 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:14 crc kubenswrapper[4727]: E1210 14:32:14.563093 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.619211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.619258 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.619463 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.619481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.619491 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.722534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.722575 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.722585 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.722602 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.722613 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.825867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.825970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.825989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.826011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.826028 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.929864 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.929979 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.930035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.930084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4727]: I1210 14:32:14.930106 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.032618 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.032649 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.032657 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.032673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.032682 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.136252 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.136345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.136368 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.136402 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.136446 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.239586 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.239650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.239667 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.239692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.239714 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.254612 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:32:15 crc kubenswrapper[4727]: E1210 14:32:15.254895 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:32:23.254865566 +0000 UTC m=+47.449640148 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
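The UnmountVolume failure just above is an ordering problem rather than data loss: kubelet came back up with the PVC still mounted from before the restart, but the kubevirt.io.hostpath-provisioner CSI driver has not yet re-registered with kubelet, so there is no CSI client to receive the TearDownAt call and the operation is requeued (note the retry scheduled 8s out, at m=+47.44). CSI drivers announce themselves through a socket in kubelet's plugin registration directory; a small sketch that lists what is currently registered, assuming the conventional /var/lib/kubelet/plugins_registry path (the socket name in the comment is hypothetical):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Conventional kubelet plugin-registration directory; each CSI
        // driver's registrar drops a UNIX socket here when it comes up.
        const regDir = "/var/lib/kubelet/plugins_registry"
        entries, err := os.ReadDir(regDir)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if len(entries) == 0 {
            fmt.Println("no plugins registered yet; CSI teardown calls will fail")
        }
        for _, e := range entries {
            // e.g. a kubevirt.io.hostpath-provisioner-reg.sock here would
            // mean the driver from the error above has re-registered.
            fmt.Println(e.Name())
        }
    }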
Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.361355 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.361433 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.361471 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.361520 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 10 14:32:15 crc kubenswrapper[4727]: E1210 14:32:15.361735 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 10 14:32:15 crc kubenswrapper[4727]: E1210 14:32:15.361764 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 10 14:32:15 crc kubenswrapper[4727]: E1210 14:32:15.361780 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 10 14:32:15 crc kubenswrapper[4727]: E1210 14:32:15.361852 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:23.361828234 +0000 UTC m=+47.556602786 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
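The MountVolume.SetUp failures here are the mirror image on the mount side: kubelet resolves ConfigMaps and Secrets for volumes through a per-pod object cache, and an object counts as "registered" only once the pod's worker has synced it after the restart. Until kube-root-ca.crt, openshift-service-ca.crt, and the networking-console-plugin objects are registered, every SetUp attempt is rejected and requeued. The 8s durationBeforeRetry is consistent with exponential backoff on repeated failures; a sketch of that shape, with the initial delay and cap chosen for illustration rather than read from kubelet:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Illustrative backoff parameters (assumed, not from this log):
        // start small, double on each failure, stop growing at a cap.
        delay := 500 * time.Millisecond
        maxDelay := 2 * time.Minute
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d fails -> durationBeforeRetry %s\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }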
Dec 10 14:32:15 crc kubenswrapper[4727]: E1210 14:32:15.361948 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 10 14:32:15 crc kubenswrapper[4727]: E1210 14:32:15.361994 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:23.361979997 +0000 UTC m=+47.556754549 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 10 14:32:15 crc kubenswrapper[4727]: E1210 14:32:15.362098 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 10 14:32:15 crc kubenswrapper[4727]: E1210 14:32:15.362166 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:23.362148571 +0000 UTC m=+47.556923123 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 10 14:32:15 crc kubenswrapper[4727]: E1210 14:32:15.362246 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 10 14:32:15 crc kubenswrapper[4727]: E1210 14:32:15.362267 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 10 14:32:15 crc kubenswrapper[4727]: E1210 14:32:15.362281 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 10 14:32:15 crc kubenswrapper[4727]: E1210 14:32:15.362323 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed.
No retries permitted until 2025-12-10 14:32:23.362305004 +0000 UTC m=+47.557079556 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.436297 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.436344 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.436359 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.436379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.436393 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.539466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.539810 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.539827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.539852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.539868 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.562979 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.563116 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:15 crc kubenswrapper[4727]: E1210 14:32:15.563178 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:15 crc kubenswrapper[4727]: E1210 14:32:15.563270 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.643507 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.643547 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.643559 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.643576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.643587 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.749178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.749280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.749306 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.749345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.749373 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.853497 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.853581 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.853604 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.853627 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.853648 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.859881 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" event={"ID":"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d","Type":"ContainerStarted","Data":"336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3"} Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.863009 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerStarted","Data":"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331"} Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.957708 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.957776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.957794 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.957886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4727]: I1210 14:32:15.957932 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.208140 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.208198 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.208208 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.208233 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.208245 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.334619 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.334673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.334684 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.334711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.334722 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.438333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.438407 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.438434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.438451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.438465 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.566495 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:16 crc kubenswrapper[4727]: E1210 14:32:16.566919 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.569799 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.569869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.569878 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.569897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.569972 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.610740 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab
5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.615525 4727 scope.go:117] "RemoveContainer" containerID="d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.616603 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.639496 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.686480 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.710801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.710834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.710843 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.710861 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.710871 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.725501 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.741858 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.762665 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.777594 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.806163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.806192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.806201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.806215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.806224 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.809203 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.825238 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4727]: E1210 14:32:16.828387 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.849916 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.849960 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.849970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.850021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.850106 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.852426 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.875304 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTi
me\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4727]: E1210 14:32:16.881394 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.886581 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.886617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.886633 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.886657 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.886675 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.890457 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerStarted","Data":"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca"} Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.902533 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4727]: E1210 14:32:16.975472 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.995583 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.995656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.995668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.995689 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.996081 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4727]: I1210 14:32:16.998472 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z 
is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: E1210 14:32:17.011877 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.017619 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.018231 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.018311 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.018335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.018389 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.018418 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.181538 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: E1210 14:32:17.181042 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: E1210 14:32:17.182127 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.188167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.188238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.188252 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.188275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.188322 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.292538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.292589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.292606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.292638 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.292649 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.301500 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.310449 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt"] Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.311326 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" Dec 10 14:32:17 crc kubenswrapper[4727]: W1210 14:32:17.319950 4727 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": failed to list *v1.Secret: secrets "ovn-kubernetes-control-plane-dockercfg-gs7dd" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 10 14:32:17 crc kubenswrapper[4727]: E1210 14:32:17.320060 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-gs7dd\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-control-plane-dockercfg-gs7dd\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 10 14:32:17 crc kubenswrapper[4727]: W1210 14:32:17.320152 4727 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: secrets "ovn-control-plane-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 10 14:32:17 crc kubenswrapper[4727]: E1210 14:32:17.320173 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-control-plane-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.327426 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\
\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\
\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.347527 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.349509 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2251dee-3373-4fb3-b1cd-56003fa83f22-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xnpwt\" (UID: \"c2251dee-3373-4fb3-b1cd-56003fa83f22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" Dec 10 14:32:17 crc 
kubenswrapper[4727]: I1210 14:32:17.349576 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2251dee-3373-4fb3-b1cd-56003fa83f22-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xnpwt\" (UID: \"c2251dee-3373-4fb3-b1cd-56003fa83f22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.349642 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz6ht\" (UniqueName: \"kubernetes.io/projected/c2251dee-3373-4fb3-b1cd-56003fa83f22-kube-api-access-fz6ht\") pod \"ovnkube-control-plane-749d76644c-xnpwt\" (UID: \"c2251dee-3373-4fb3-b1cd-56003fa83f22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.349677 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2251dee-3373-4fb3-b1cd-56003fa83f22-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xnpwt\" (UID: \"c2251dee-3373-4fb3-b1cd-56003fa83f22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.367706 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.390042 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.395532 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.395570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.395580 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.395598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.395612 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.451438 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz6ht\" (UniqueName: \"kubernetes.io/projected/c2251dee-3373-4fb3-b1cd-56003fa83f22-kube-api-access-fz6ht\") pod \"ovnkube-control-plane-749d76644c-xnpwt\" (UID: \"c2251dee-3373-4fb3-b1cd-56003fa83f22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.451491 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2251dee-3373-4fb3-b1cd-56003fa83f22-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xnpwt\" (UID: \"c2251dee-3373-4fb3-b1cd-56003fa83f22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.451531 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2251dee-3373-4fb3-b1cd-56003fa83f22-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xnpwt\" (UID: \"c2251dee-3373-4fb3-b1cd-56003fa83f22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.451562 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2251dee-3373-4fb3-b1cd-56003fa83f22-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xnpwt\" (UID: \"c2251dee-3373-4fb3-b1cd-56003fa83f22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.453062 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2251dee-3373-4fb3-b1cd-56003fa83f22-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xnpwt\" (UID: \"c2251dee-3373-4fb3-b1cd-56003fa83f22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.453407 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2251dee-3373-4fb3-b1cd-56003fa83f22-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xnpwt\" (UID: \"c2251dee-3373-4fb3-b1cd-56003fa83f22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.478857 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz6ht\" (UniqueName: \"kubernetes.io/projected/c2251dee-3373-4fb3-b1cd-56003fa83f22-kube-api-access-fz6ht\") pod \"ovnkube-control-plane-749d76644c-xnpwt\" (UID: \"c2251dee-3373-4fb3-b1cd-56003fa83f22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.499624 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.499670 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.499680 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 
10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.499703 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.499714 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.503817 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.521002 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.535821 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
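Every status patch in the records above and below is rejected by the same admission webhook, pod.network-node-identity.openshift.io at https://127.0.0.1:9743, because its serving certificate expired on 2025-08-24 while the node clock reads 2025-12-10. A minimal Go sketch (not part of the captured journal; the endpoint address is taken from these records, and InsecureSkipVerify is deliberate so the expired certificate can still be inspected rather than trusted) to confirm the validity window from the node:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Webhook endpoint as reported in the journal records.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	state := conn.ConnectionState()
	if len(state.PeerCertificates) == 0 {
		log.Fatal("no peer certificate presented")
	}
	cert := state.PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}
```

If this prints expired: true, the usual fix path is rotating the network-node-identity serving certificate (or correcting a skewed node clock); the kubelet retries these status patches on its own once the webhook accepts connections again.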
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.555995 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.562437 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.562534 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:17 crc kubenswrapper[4727]: E1210 14:32:17.562651 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:17 crc kubenswrapper[4727]: E1210 14:32:17.562798 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.573422 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.589834 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.603523 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.603568 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.603580 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.603599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.603611 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.609927 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.636642 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd 
nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"moun
tPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-1
0T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.661998 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.680948 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.700117 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.707631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.707673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.707683 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.707706 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.707720 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.719053 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.734586 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status 
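Interleaved with the webhook failures, the node keeps flapping to NotReady for a second, independent reason: no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/. A sketch of that readiness condition (the directory path comes straight from these records; the glob patterns are an assumption about which config file extensions count):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory the kubelet names in the NetworkPluginNotReady message.
	confDir := "/etc/kubernetes/cni/net.d"

	// Collect anything that looks like a CNI config file.
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(confDir, pat))
		found = append(found, matches...)
	}

	if len(found) == 0 {
		fmt.Println("no CNI configuration file found - network plugin not ready")
		os.Exit(1)
	}
	for _, f := range found {
		fmt.Println("CNI config:", f)
	}
}
```

Once the network plugin (here OVN-Kubernetes, deployed via the multus and ovnkube-node pods that these records show stuck in PodInitializing) writes its configuration into that directory, NetworkReady should flip to true and the "no sandbox for pod" and "Error syncing pod" entries above should stop recurring.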
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.753929 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.785776 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.803408 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.810752 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.810832 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.810843 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.810864 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.810875 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.820361 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.841886 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.864110 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.897983 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerStarted","Data":"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f"} Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.899724 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.901488 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9"} Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.901967 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.914076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.914140 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.914161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.914195 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.914213 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.940567 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.961286 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4727]: I1210 14:32:17.981078 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.017854 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.017947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.017964 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.017995 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.018011 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.033720 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.120299 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 
14:32:18.124181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.124198 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.124237 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.124253 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.124789 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-
cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.161684 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.179054 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.226566 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.226627 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.226642 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.226662 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.226675 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.249535 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab
5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.264613 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.282974 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.302693 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.321309 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.329190 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.329253 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.329266 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.329288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.329312 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.340704 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.361860 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.432691 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.432753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.432767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.432789 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.432802 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4727]: E1210 14:32:18.453038 4727 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-control-plane-metrics-cert: failed to sync secret cache: timed out waiting for the condition Dec 10 14:32:18 crc kubenswrapper[4727]: E1210 14:32:18.453217 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2251dee-3373-4fb3-b1cd-56003fa83f22-ovn-control-plane-metrics-cert podName:c2251dee-3373-4fb3-b1cd-56003fa83f22 nodeName:}" failed. 
No retries permitted until 2025-12-10 14:32:18.953156972 +0000 UTC m=+43.147931514 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-control-plane-metrics-cert" (UniqueName: "kubernetes.io/secret/c2251dee-3373-4fb3-b1cd-56003fa83f22-ovn-control-plane-metrics-cert") pod "ovnkube-control-plane-749d76644c-xnpwt" (UID: "c2251dee-3373-4fb3-b1cd-56003fa83f22") : failed to sync secret cache: timed out waiting for the condition Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.458741 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.466241 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.577749 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:18 crc kubenswrapper[4727]: E1210 14:32:18.577966 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.579463 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.579607 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.579623 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.579648 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.579661 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.659281 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-wwmwn"] Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.659845 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:18 crc kubenswrapper[4727]: E1210 14:32:18.659970 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.677883 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5bh9\" (UniqueName: \"kubernetes.io/projected/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-kube-api-access-l5bh9\") pod \"network-metrics-daemon-wwmwn\" (UID: \"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\") " pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.677967 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs\") pod \"network-metrics-daemon-wwmwn\" (UID: \"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\") " pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.682748 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.682773 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.682782 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.682796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.682808 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.692153 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab
5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.702678 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.716260 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.732054 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.744722 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.756065 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.771723 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.779102 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs\") pod \"network-metrics-daemon-wwmwn\" (UID: \"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\") " pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.779338 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5bh9\" (UniqueName: \"kubernetes.io/projected/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-kube-api-access-l5bh9\") pod \"network-metrics-daemon-wwmwn\" (UID: \"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\") " pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:18 crc kubenswrapper[4727]: E1210 14:32:18.779780 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:18 crc kubenswrapper[4727]: E1210 14:32:18.779920 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs podName:2bcea03d-69bd-4530-91b9-ca3ba1ffc871 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:19.279854275 +0000 UTC m=+43.474628817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs") pod "network-metrics-daemon-wwmwn" (UID: "2bcea03d-69bd-4530-91b9-ca3ba1ffc871") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.789114 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.789176 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.789187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.789223 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.789235 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.796156 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.806736 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5bh9\" (UniqueName: \"kubernetes.io/projected/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-kube-api-access-l5bh9\") pod \"network-metrics-daemon-wwmwn\" (UID: \"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\") " pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.812648 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.829040 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.850301 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.867504 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\"
:\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.881472 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.891383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.891418 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.891427 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.891443 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.891455 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.895801 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.908747 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerStarted","Data":"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360"} Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.912169 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.980261 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2251dee-3373-4fb3-b1cd-56003fa83f22-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xnpwt\" (UID: \"c2251dee-3373-4fb3-b1cd-56003fa83f22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.984230 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2251dee-3373-4fb3-b1cd-56003fa83f22-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xnpwt\" (UID: \"c2251dee-3373-4fb3-b1cd-56003fa83f22\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.994396 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.994442 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.994455 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.994479 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4727]: I1210 14:32:18.994496 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.104324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.104569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.104584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.104605 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.104620 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.140613 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.207343 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.207383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.207392 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.207407 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.207419 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.283733 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs\") pod \"network-metrics-daemon-wwmwn\" (UID: \"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\") " pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:19 crc kubenswrapper[4727]: E1210 14:32:19.284003 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:19 crc kubenswrapper[4727]: E1210 14:32:19.284125 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs podName:2bcea03d-69bd-4530-91b9-ca3ba1ffc871 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:20.284096188 +0000 UTC m=+44.478870750 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs") pod "network-metrics-daemon-wwmwn" (UID: "2bcea03d-69bd-4530-91b9-ca3ba1ffc871") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.310897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.310954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.310963 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.310980 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.310995 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.415307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.415400 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.415426 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.415459 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.415485 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4727]: W1210 14:32:19.436751 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2251dee_3373_4fb3_b1cd_56003fa83f22.slice/crio-16ea8a281b07a4796147764fc8555ad7bd75f30cf77f8134b23d5fe3f7276599 WatchSource:0}: Error finding container 16ea8a281b07a4796147764fc8555ad7bd75f30cf77f8134b23d5fe3f7276599: Status 404 returned error can't find the container with id 16ea8a281b07a4796147764fc8555ad7bd75f30cf77f8134b23d5fe3f7276599 Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.519201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.519256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.519269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.519291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.519303 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.562570 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.562610 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:19 crc kubenswrapper[4727]: E1210 14:32:19.562809 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:19 crc kubenswrapper[4727]: E1210 14:32:19.562983 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.622072 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.622122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.622138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.622158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.622171 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.724848 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.724886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.724895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.724933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.724945 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.828078 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.828136 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.828148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.828171 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.828188 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.914054 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" event={"ID":"c2251dee-3373-4fb3-b1cd-56003fa83f22","Type":"ContainerStarted","Data":"16ea8a281b07a4796147764fc8555ad7bd75f30cf77f8134b23d5fe3f7276599"} Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.916231 4727 generic.go:334] "Generic (PLEG): container finished" podID="8e83cbea-272d-4fcf-a39b-f2a60adbfb9d" containerID="336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3" exitCode=0 Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.916278 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" event={"ID":"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d","Type":"ContainerDied","Data":"336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3"} Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.940692 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-12-10T14:32:19Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.952663 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-10T14:32:19Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.965840 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:19Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.978950 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:19Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:19 crc kubenswrapper[4727]: I1210 14:32:19.995383 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:19Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.010851 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.025167 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.040840 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.052055 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.055215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.055252 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.055261 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.055277 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.055287 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.066089 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.079375 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.092086 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.102836 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc 
kubenswrapper[4727]: I1210 14:32:20.118683 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.135342 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.157783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.157812 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.157821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.157836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.157847 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.260164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.260202 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.260211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.260227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.260238 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.326183 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs\") pod \"network-metrics-daemon-wwmwn\" (UID: \"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\") " pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:20 crc kubenswrapper[4727]: E1210 14:32:20.326448 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:20 crc kubenswrapper[4727]: E1210 14:32:20.326593 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs podName:2bcea03d-69bd-4530-91b9-ca3ba1ffc871 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:22.326544487 +0000 UTC m=+46.521319029 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs") pod "network-metrics-daemon-wwmwn" (UID: "2bcea03d-69bd-4530-91b9-ca3ba1ffc871") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.363575 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.363638 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.363649 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.363665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.363675 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.466650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.466698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.466715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.466732 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.466743 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.563036 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:20 crc kubenswrapper[4727]: E1210 14:32:20.563241 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.563722 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:20 crc kubenswrapper[4727]: E1210 14:32:20.563806 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.568660 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.568695 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.568702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.568716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.568727 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.708986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.709044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.709057 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.709078 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.709092 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.811965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.812022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.812036 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.812057 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.812073 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.964701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.964733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.964742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.964757 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.964767 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.967659 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" event={"ID":"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d","Type":"ContainerStarted","Data":"f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7"} Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.972028 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerStarted","Data":"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da"} Dec 10 14:32:20 crc kubenswrapper[4727]: I1210 14:32:20.973419 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" event={"ID":"c2251dee-3373-4fb3-b1cd-56003fa83f22","Type":"ContainerStarted","Data":"168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c"} Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.070591 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.070869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.070877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.070892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.070916 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.173325 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.173364 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.173372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.173387 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.173397 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.276253 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.276286 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.276295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.276309 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.276319 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.378555 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.378594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.378602 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.378618 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.378629 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.480987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.481020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.481030 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.481045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.481054 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.562783 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:21 crc kubenswrapper[4727]: E1210 14:32:21.563123 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.563224 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:21 crc kubenswrapper[4727]: E1210 14:32:21.563361 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.584365 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.584427 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.584444 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.584467 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.584483 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.687704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.687801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.687833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.687882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.687953 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.793122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.793180 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.793199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.793224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.793244 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.896371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.896415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.896426 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.896445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.896457 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.998469 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.998512 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.998521 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.998535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4727]: I1210 14:32:21.998546 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.101361 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.101419 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.101429 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.101447 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.101462 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.204645 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.204706 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.204715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.204734 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.204748 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.309116 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.309169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.309210 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.309228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.309238 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.377522 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs\") pod \"network-metrics-daemon-wwmwn\" (UID: \"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\") " pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:22 crc kubenswrapper[4727]: E1210 14:32:22.377695 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:22 crc kubenswrapper[4727]: E1210 14:32:22.377784 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs podName:2bcea03d-69bd-4530-91b9-ca3ba1ffc871 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:26.377754838 +0000 UTC m=+50.572529380 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs") pod "network-metrics-daemon-wwmwn" (UID: "2bcea03d-69bd-4530-91b9-ca3ba1ffc871") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.412304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.412356 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.412365 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.412389 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.412402 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.515894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.515979 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.515997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.516019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.516032 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.562315 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.562351 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 10 14:32:22 crc kubenswrapper[4727]: E1210 14:32:22.562486 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871"
Dec 10 14:32:22 crc kubenswrapper[4727]: E1210 14:32:22.562607 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.619033 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.619074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.619084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.619098 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.619108 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.721541 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.721606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.721620 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.721637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.721647 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.825151 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.825217 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.825228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.825245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.825257 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.928416 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.928493 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.928509 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.928533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.928547 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 10 14:32:22 crc kubenswrapper[4727]: I1210 14:32:22.995988 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-10T14:32:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.014562 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCo
unt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107
ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.029293 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z"
Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.031043 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.031090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.031101 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.031122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.031134 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.040197 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.053119 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rel
ease\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.071547 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.096146 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.109451 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.123892 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.134115 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.134232 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.134244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.134264 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.134277 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.135113 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.154792 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.165476 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.175990 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.187290 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.199854 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.236787 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.236831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.236840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.236856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.236867 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.284654 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:23 crc kubenswrapper[4727]: E1210 14:32:23.284937 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:32:39.284889102 +0000 UTC m=+63.479663644 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.344655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.344705 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.344714 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.344731 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.344746 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.385420 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.385512 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.385621 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:23 crc kubenswrapper[4727]: E1210 14:32:23.385625 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.385683 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") 
" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:23 crc kubenswrapper[4727]: E1210 14:32:23.385745 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:39.385719964 +0000 UTC m=+63.580494506 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:23 crc kubenswrapper[4727]: E1210 14:32:23.385808 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:23 crc kubenswrapper[4727]: E1210 14:32:23.385859 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:23 crc kubenswrapper[4727]: E1210 14:32:23.385883 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:23 crc kubenswrapper[4727]: E1210 14:32:23.385897 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:23 crc kubenswrapper[4727]: E1210 14:32:23.385967 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:39.385898358 +0000 UTC m=+63.580672920 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:23 crc kubenswrapper[4727]: E1210 14:32:23.385819 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:23 crc kubenswrapper[4727]: E1210 14:32:23.386010 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:39.38598291 +0000 UTC m=+63.580757472 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:23 crc kubenswrapper[4727]: E1210 14:32:23.386035 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:23 crc kubenswrapper[4727]: E1210 14:32:23.386052 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:23 crc kubenswrapper[4727]: E1210 14:32:23.386095 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:39.386081992 +0000 UTC m=+63.580856554 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.447615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.447668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.447680 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.447699 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.447711 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.552073 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.552161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.552177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.552200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.552213 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.562236 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:23 crc kubenswrapper[4727]: E1210 14:32:23.562469 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.563096 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:23 crc kubenswrapper[4727]: E1210 14:32:23.563739 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.655785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.655825 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.655833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.655848 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.655857 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.758107 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.758159 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.758167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.758185 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.758198 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.861757 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.861807 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.861816 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.861831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.861841 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.965931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.966075 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.966094 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.966119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.966173 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.989357 4727 generic.go:334] "Generic (PLEG): container finished" podID="8e83cbea-272d-4fcf-a39b-f2a60adbfb9d" containerID="f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7" exitCode=0 Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.989422 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" event={"ID":"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d","Type":"ContainerDied","Data":"f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7"} Dec 10 14:32:23 crc kubenswrapper[4727]: I1210 14:32:23.993259 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" event={"ID":"c2251dee-3373-4fb3-b1cd-56003fa83f22","Type":"ContainerStarted","Data":"a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c"} Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.004331 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.015712 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.030031 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.055852 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.069553 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.069628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4727]: 
I1210 14:32:24.069642 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.069663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.069677 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.073681 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.087366 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.101197 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.111993 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc 
kubenswrapper[4727]: I1210 14:32:24.127969 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.141291 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.155618 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.165834 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.172858 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.172898 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.172923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.172941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.172955 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.177479 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.188451 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.203922 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.299755 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.299819 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.299829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.299846 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.300116 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.448265 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.448664 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.448677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.448696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.448708 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.590848 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:24 crc kubenswrapper[4727]: E1210 14:32:24.591005 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.591059 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:24 crc kubenswrapper[4727]: E1210 14:32:24.591186 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.591260 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:24 crc kubenswrapper[4727]: E1210 14:32:24.591306 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.593625 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.593652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.593660 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.593673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.593683 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.696634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.696710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.696734 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.696790 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.696807 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.799840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.799895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.799923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.799939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.799947 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.917287 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.917321 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.917338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.917361 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4727]: I1210 14:32:24.917379 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.019592 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.019622 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.019633 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.019649 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.019667 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.066572 4727 generic.go:334] "Generic (PLEG): container finished" podID="8e83cbea-272d-4fcf-a39b-f2a60adbfb9d" containerID="b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318" exitCode=0 Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.066686 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" event={"ID":"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d","Type":"ContainerDied","Data":"b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318"} Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.083703 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerStarted","Data":"24c95a16f0900852239d64d0dc69c20f43909eb1326cd9d4635350b6f7be39a0"} Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.084834 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.084883 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.084942 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.085508 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.192658 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\
"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.197421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.197500 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.197514 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.197540 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.197554 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.203722 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.203867 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.211431 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.224584 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.240895 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.260632 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.276260 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.306753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.306825 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.306850 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.306882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.306895 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.341743 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.363832 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.382330 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc 
kubenswrapper[4727]: I1210 14:32:25.406996 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.410156 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.410178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.410188 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.410205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.410219 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.421515 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.438216 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.449777 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.463420 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.481543 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c95a16f0900852239d64d0dc69c20f43909eb1
326cd9d4635350b6f7be39a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.493887 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 
14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.513346 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.513399 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.513421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.513452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.513492 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.562332 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:25 crc kubenswrapper[4727]: E1210 14:32:25.562559 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.588708 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.609699 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.615998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.616078 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.616094 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.616118 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.616131 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.628550 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc 
kubenswrapper[4727]: I1210 14:32:25.678026 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.729445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.729487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.729500 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.729519 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.729529 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.746444 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.762367 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 
14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.774747 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.787999 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.805291 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.832887 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.832950 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.832972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.832989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.833000 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.896045 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.912869 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.926890 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.935405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.935441 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.935450 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.935465 4727 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.935476 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4727]: I1210 14:32:25.943207 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.038189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.038225 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.038234 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.038250 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.038260 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.141378 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.141457 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.141468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.141491 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.141504 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.245495 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.245573 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.245584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.245603 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.245616 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.349177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.349221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.349230 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.349246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.349256 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.418881 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs\") pod \"network-metrics-daemon-wwmwn\" (UID: \"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\") " pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:26 crc kubenswrapper[4727]: E1210 14:32:26.419064 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:26 crc kubenswrapper[4727]: E1210 14:32:26.419152 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs podName:2bcea03d-69bd-4530-91b9-ca3ba1ffc871 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:34.419126491 +0000 UTC m=+58.613901033 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs") pod "network-metrics-daemon-wwmwn" (UID: "2bcea03d-69bd-4530-91b9-ca3ba1ffc871") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.452542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.452594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.452607 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.452626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.452640 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.556081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.556129 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.556143 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.556164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.556180 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.609754 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.609839 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:26 crc kubenswrapper[4727]: E1210 14:32:26.610346 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.609973 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:26 crc kubenswrapper[4727]: E1210 14:32:26.610355 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:26 crc kubenswrapper[4727]: E1210 14:32:26.610595 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.658866 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.659152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.659234 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.659304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.659385 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.665651 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c95a16f0900852239d64d0dc69c20f43909eb1326cd9d4635350b6f7be39a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.680638 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 
14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.701281 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.722218 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.738654 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.751977 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.762741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.762788 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.762805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.762825 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.762838 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.769109 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:
32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.785227 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.796316 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.808510 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.822317 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.836410 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\"
:\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.851627 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.866405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.866443 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.866456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.866474 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.866485 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.867057 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.881774 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.970175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.970223 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.970234 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.970253 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4727]: I1210 14:32:26.970265 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.074650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.075240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.075372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.075487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.075581 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.105830 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" event={"ID":"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d","Type":"ContainerStarted","Data":"fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955"} Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.121079 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.134548 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.164880 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c95a16f0900852239d64d0dc69c20f43909eb1
326cd9d4635350b6f7be39a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.177158 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 
14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.189054 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.209316 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda
1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.223745 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.229026 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.229105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.229121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.229143 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.229162 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.237186 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.255745 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.302728 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.332454 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.334513 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.334788 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.334917 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.335000 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.335070 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.352568 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.366851 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.385106 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.404600 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.415415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.415605 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.415695 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.415786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.415865 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4727]: E1210 14:32:27.433057 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.439167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.439381 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.439475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.439576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.439659 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4727]: E1210 14:32:27.453786 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.457616 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.457679 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.457692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.457715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.457731 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4727]: E1210 14:32:27.497195 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.502335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.502371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.502379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.502395 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.502417 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4727]: E1210 14:32:27.522839 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.527017 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.527044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.527054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.527068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.527077 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.562424 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:27 crc kubenswrapper[4727]: E1210 14:32:27.562656 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:27 crc kubenswrapper[4727]: E1210 14:32:27.639086 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4727]: E1210 14:32:27.639230 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.640957 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
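[Editor's note] The status-update failure above (its retry, a byte-identical patch payload rejected with the same x509 error, is omitted here) is one fault, not many: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-10; the kubelet retries the status patch a small fixed number of times (nodeStatusUpdateRetry, 5 in current kubelet sources) and then logs "update node status exceeds retry count". A minimal sketch to confirm the expiry from the node itself, assuming Python with the third-party cryptography package (>= 42 for not_valid_after_utc) is available; only the host/port are taken from the log:

# Minimal sketch (not the cluster's own tooling): fetch the webhook's
# serving certificate and compare its notAfter to the current time.
import ssl
from datetime import datetime, timezone

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # network-node-identity webhook endpoint from the log

# get_server_certificate() retrieves the peer certificate without chain
# validation -- validation is exactly the step that is failing here.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

now = datetime.now(timezone.utc)
not_after = cert.not_valid_after_utc
print("subject: ", cert.subject.rfc4514_string())
print("notAfter:", not_after.isoformat())
print("now:     ", now.isoformat())
print("EXPIRED" if now > not_after else "still valid")

If the certificate really is stale, rather than the guest clock being wrong after a long suspend, the usual remedy on a disposable CRC instance is to restart the cluster and let its certificate-rotation controllers reissue it.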
event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.641002 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.641020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.641042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.641055 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.744655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.744697 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.744706 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.744724 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.744735 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.847344 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.847656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.847668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.847685 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.847696 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.950821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.950851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.950859 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.950872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4727]: I1210 14:32:27.950882 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.052969 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.053013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.053023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.053038 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.053048 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.156257 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.156353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.156369 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.156394 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.156409 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.259220 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.259260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.259269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.259284 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.259295 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.361551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.361599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.361608 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.361630 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.361641 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.464779 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.464816 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.464825 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.464840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.464853 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.562280 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.562366 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:28 crc kubenswrapper[4727]: E1210 14:32:28.562486 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.562565 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:28 crc kubenswrapper[4727]: E1210 14:32:28.562584 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:28 crc kubenswrapper[4727]: E1210 14:32:28.563110 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.566688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.566732 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.566742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.566759 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.566772 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.670373 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.670415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.670427 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.670446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.670458 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.773005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.773048 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.773058 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.773074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.773085 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.876193 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.876236 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.876245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.876260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.876270 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.978998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.979069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.979092 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.979128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4727]: I1210 14:32:28.979150 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.083063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.083117 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.083131 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.083151 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.083162 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.186032 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.186077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.186089 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.186105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.186117 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.352545 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.353284 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.353377 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.353473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.353546 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.457324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.457381 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.457395 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.457418 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.457442 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.561079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.561145 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.561166 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.561202 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.561219 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.562553 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:29 crc kubenswrapper[4727]: E1210 14:32:29.562670 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.663900 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.663959 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.663970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.663989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.664001 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.767626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.767698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.767719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.767750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.767774 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.871015 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.871063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.871072 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.871087 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.871097 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.973965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.974020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.974044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.974077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4727]: I1210 14:32:29.974098 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.078013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.078070 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.078085 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.078121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.078158 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.181316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.181395 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.181438 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.181475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.181493 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.285222 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.285287 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.285297 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.285313 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.285325 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.387701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.387750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.387764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.387786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.387800 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.490136 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.490183 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.490196 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.490216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.490230 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.612773 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.612797 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.612826 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:30 crc kubenswrapper[4727]: E1210 14:32:30.613368 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:30 crc kubenswrapper[4727]: E1210 14:32:30.613478 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:30 crc kubenswrapper[4727]: E1210 14:32:30.613433 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.623124 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.624174 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.624226 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.624257 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.624271 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.727034 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.727082 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.727093 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.727111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.727127 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.830426 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.830479 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.830494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.830526 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.830550 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.933631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.933688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.933700 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.933718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4727]: I1210 14:32:30.933728 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.037360 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.037461 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.037480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.037507 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.037523 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.128386 4727 generic.go:334] "Generic (PLEG): container finished" podID="8e83cbea-272d-4fcf-a39b-f2a60adbfb9d" containerID="fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955" exitCode=0 Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.128722 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" event={"ID":"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d","Type":"ContainerDied","Data":"fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955"} Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.140070 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.140119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.140136 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.140158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.140174 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.153408 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.166557 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.186107 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.204479 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.228713 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.244023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.244101 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.244125 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.244158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.244180 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.247619 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.267376 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.305864 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.329103 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.349755 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.350473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.351125 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.351382 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.351506 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.350000 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.370660 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.387782 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.405105 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.426423 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c95a16f0900852239d64d0dc69c20f43909eb1
326cd9d4635350b6f7be39a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.570682 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.570685 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4727]: E1210 14:32:31.570878 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.572796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.572840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.572854 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.572872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.572883 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.675493 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.675547 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.675561 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.675578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.675592 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.778214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.778526 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.778588 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.778653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.778714 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.882245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.882312 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.882325 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.882347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.882361 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.985073 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.985129 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.985144 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.985161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4727]: I1210 14:32:31.985172 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.088019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.088262 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.088345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.088452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.088564 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.191260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.191610 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.191768 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.191898 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.192021 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.295506 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.295589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.295599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.295618 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.295631 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.416007 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.416740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.416808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.416882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.417013 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.520210 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.520505 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.520709 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.520800 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.520873 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.563020 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.563287 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:32 crc kubenswrapper[4727]: E1210 14:32:32.675620 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.675693 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:32 crc kubenswrapper[4727]: E1210 14:32:32.675777 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:32 crc kubenswrapper[4727]: E1210 14:32:32.675812 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.679018 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.679074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.679086 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.679105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.679119 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.782440 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.782512 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.782533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.782571 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.782606 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.889800 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.889867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.889881 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.889919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.889934 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.992821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.992894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.992948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.992978 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4727]: I1210 14:32:32.993000 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.096063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.096119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.096136 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.096159 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.096175 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.148409 4727 generic.go:334] "Generic (PLEG): container finished" podID="8e83cbea-272d-4fcf-a39b-f2a60adbfb9d" containerID="ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8" exitCode=0 Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.148835 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" event={"ID":"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d","Type":"ContainerDied","Data":"ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8"} Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.166197 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.180323 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.193254 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.198311 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.198350 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.198359 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.198373 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.198384 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.206867 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.220973 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.237618 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.254898 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.267257 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc 
kubenswrapper[4727]: I1210 14:32:33.283556 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.296634 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.300760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.300801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.300813 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.300830 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.300845 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.309199 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.326638 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.338844 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.350476 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.373500 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c95a16f0900852239d64d0dc69c20f43909eb1
326cd9d4635350b6f7be39a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.404187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.404252 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.404274 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.404304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.404324 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.507402 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.507461 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.507471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.507495 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.507508 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.562873 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:33 crc kubenswrapper[4727]: E1210 14:32:33.563059 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.587288 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.605652 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.613151 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.613317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.613526 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.613599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.613622 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.619538 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c95a16f0900852239d64d0dc69c20f43909eb1326cd9d4635350b6f7be39a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.636405 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 
14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.650214 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.664567 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.682570 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.699858 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.717589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.717643 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.717655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.717673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.717687 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.721159 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3f
dc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.741283 4727 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.756168 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.776343 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.792972 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.821322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.821404 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.821427 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.821454 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.821468 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.823018 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.840750 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.853849 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.871274 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:33Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.925312 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.925376 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.925388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.925406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4727]: I1210 14:32:33.925419 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.028504 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.028553 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.028565 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.028584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.028603 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.131085 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.131127 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.131136 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.131151 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.131163 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.160234 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" event={"ID":"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d","Type":"ContainerStarted","Data":"0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7"} Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.163727 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/0.log" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.167245 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerID="24c95a16f0900852239d64d0dc69c20f43909eb1326cd9d4635350b6f7be39a0" exitCode=1 Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.167314 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerDied","Data":"24c95a16f0900852239d64d0dc69c20f43909eb1326cd9d4635350b6f7be39a0"} Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.168410 4727 scope.go:117] "RemoveContainer" containerID="24c95a16f0900852239d64d0dc69c20f43909eb1326cd9d4635350b6f7be39a0" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.173384 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.192105 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.212685 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c95a16f0900852239d64d0dc69c20f43909eb1
326cd9d4635350b6f7be39a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.226822 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 
14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.233441 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.233500 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.233513 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.233535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.233548 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.246309 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.261796 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.278743 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.291641 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.304807 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.320213 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.332836 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.337081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.337237 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.337399 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.337488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.337626 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.348089 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.361877 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.375242 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.390480 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.406569 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\"
:\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.422776 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.435711 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.440570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.440599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.440612 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.440631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.440642 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.452582 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.466731 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.480545 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.490938 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc 
kubenswrapper[4727]: I1210 14:32:34.494535 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs\") pod \"network-metrics-daemon-wwmwn\" (UID: \"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\") " pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:34 crc kubenswrapper[4727]: E1210 14:32:34.494753 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:34 crc kubenswrapper[4727]: E1210 14:32:34.494859 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs podName:2bcea03d-69bd-4530-91b9-ca3ba1ffc871 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:50.494832688 +0000 UTC m=+74.689607230 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs") pod "network-metrics-daemon-wwmwn" (UID: "2bcea03d-69bd-4530-91b9-ca3ba1ffc871") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.500268 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.514294 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.533771 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c95a16f0900852239d64d0dc69c20f43909eb1
326cd9d4635350b6f7be39a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c95a16f0900852239d64d0dc69c20f43909eb1326cd9d4635350b6f7be39a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"message\\\":\\\"ler 2\\\\nI1210 14:32:33.319711 5950 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:33.321210 5950 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:33.321428 5950 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:33.321440 5950 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 14:32:33.321448 5950 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 14:32:33.321456 5950 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:33.321465 5950 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 14:32:33.321693 5950 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:33.321782 5950 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:33.322080 5950 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:33.322463 5950 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.544085 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.544131 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.544146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.544167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.544184 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.554610 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.562475 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:34 crc kubenswrapper[4727]: E1210 14:32:34.562696 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.562826 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.563191 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:34 crc kubenswrapper[4727]: E1210 14:32:34.563275 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:34 crc kubenswrapper[4727]: E1210 14:32:34.563352 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.567709 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.583670 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.598023 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.610224 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.620861 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.633336 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.647540 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.647590 4727 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.647606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.647625 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.647638 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.750729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.750789 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.750800 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.750821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.750832 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.809009 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.822633 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.839031 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.854282 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.854753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.854860 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.855007 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.855135 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.857020 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.870686 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.884567 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.903401 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.922486 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.939575 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.955300 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.962310 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.962366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.962376 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.962395 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.962406 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.973078 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:34 crc kubenswrapper[4727]: I1210 14:32:34.990189 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:34Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.003240 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc 
kubenswrapper[4727]: I1210 14:32:35.015969 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.031721 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.053509 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c95a16f0900852239d64d0dc69c20f43909eb1
326cd9d4635350b6f7be39a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c95a16f0900852239d64d0dc69c20f43909eb1326cd9d4635350b6f7be39a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"message\\\":\\\"ler 2\\\\nI1210 14:32:33.319711 5950 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:33.321210 5950 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:33.321428 5950 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:33.321440 5950 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 14:32:33.321448 5950 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 14:32:33.321456 5950 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:33.321465 5950 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 14:32:33.321693 5950 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:33.321782 5950 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:33.322080 5950 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:33.322463 5950 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.067860 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.067950 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.067965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.067989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.068003 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.068591 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.170738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.170785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.170795 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.170814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.170827 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.176670 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/0.log" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.182056 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerStarted","Data":"22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a"} Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.182618 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.197831 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.215684 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 
2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.235702 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.251377 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.267416 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.273185 4727 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.273236 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.273247 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.273267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.273282 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.289682 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.308044 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.328403 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.351493 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.367272 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.375572 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.375615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.375627 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.375647 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.375658 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.381390 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.394532 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.408956 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.423790 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.443638 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e747d26c6a9cc568b690991ae4653a35636d02
8dba4298066111da7b1d094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c95a16f0900852239d64d0dc69c20f43909eb1326cd9d4635350b6f7be39a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"message\\\":\\\"ler 2\\\\nI1210 14:32:33.319711 5950 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:33.321210 5950 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:33.321428 5950 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:33.321440 5950 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 14:32:33.321448 5950 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 14:32:33.321456 5950 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:33.321465 5950 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 14:32:33.321693 5950 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:33.321782 5950 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:33.322080 5950 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:33.322463 5950 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.457996 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 
14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.478442 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.478492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.478502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.478521 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.478531 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.562585 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:35 crc kubenswrapper[4727]: E1210 14:32:35.562775 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.581769 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.581834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.581848 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.581872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.581887 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.684665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.684730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.684746 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.684774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.684791 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.788027 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.788085 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.788102 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.788124 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.788138 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.891631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.891683 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.891695 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.891715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.891726 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.994932 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.994988 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.995003 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.995023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4727]: I1210 14:32:35.995033 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.097884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.097960 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.097974 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.097998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.098013 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.201192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.201238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.201251 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.201272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.201284 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.303397 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.303432 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.303442 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.303458 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.303468 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.406875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.406986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.407009 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.407039 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.407061 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.510549 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.510852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.510862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.510877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.510890 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.562038 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.562073 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:36 crc kubenswrapper[4727]: E1210 14:32:36.562237 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.562284 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:36 crc kubenswrapper[4727]: E1210 14:32:36.562474 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:36 crc kubenswrapper[4727]: E1210 14:32:36.562585 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.583979 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c95a16f0900852239d64d0dc69c20f43909eb1326cd9d4635350b6f7be39a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"message\\\":\\\"ler 2\\\\nI1210 14:32:33.319711 5950 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:33.321210 5950 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:33.321428 5950 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:33.321440 5950 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 14:32:33.321448 5950 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 14:32:33.321456 5950 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:33.321465 5950 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 14:32:33.321693 5950 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:33.321782 5950 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:33.322080 5950 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:33.322463 5950 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.600376 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 
14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.614714 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.614764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.615087 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.615140 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.615157 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.619005 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.666109 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.692144 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.718281 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.718329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.718341 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.718359 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.718371 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.720420 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.733493 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.745837 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.759833 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.774034 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.786025 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.799207 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.817350 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.821078 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.821110 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.821122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.821137 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.821149 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.834612 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.848479 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.862447 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.926668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.926740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.926754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.926777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4727]: I1210 14:32:36.926789 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.029713 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.029753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.029761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.029777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.029788 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.133695 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.133758 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.133770 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.133798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.133811 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.196879 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/1.log" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.198530 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/0.log" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.202814 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerID="22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a" exitCode=1 Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.202899 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerDied","Data":"22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a"} Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.203026 4727 scope.go:117] "RemoveContainer" containerID="24c95a16f0900852239d64d0dc69c20f43909eb1326cd9d4635350b6f7be39a0" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.203971 4727 scope.go:117] "RemoveContainer" containerID="22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a" Dec 10 14:32:37 crc kubenswrapper[4727]: E1210 14:32:37.204448 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-k8b7p_openshift-ovn-kubernetes(5b9f88bc-1b6e-4dd4-9d6e-febdde2facba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.226357 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.237795 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.238107 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.238170 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.238246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.238309 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.242171 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.257082 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.274596 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.290224 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc 
kubenswrapper[4727]: I1210 14:32:37.307202 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.320499 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.336787 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.341114 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.341172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.341184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.341204 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.341220 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.357886 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c95a16f0900852239d64d0dc69c20f43909eb1326cd9d4635350b6f7be39a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"message\\\":\\\"ler 2\\\\nI1210 14:32:33.319711 5950 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:33.321210 5950 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:33.321428 5950 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:33.321440 5950 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 14:32:33.321448 5950 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 14:32:33.321456 5950 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:33.321465 5950 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 14:32:33.321693 5950 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:33.321782 5950 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:33.322080 5950 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:33.322463 5950 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:36Z\\\",\\\"message\\\":\\\"]services.lbConfig(nil)\\\\nF1210 14:32:35.757654 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z]\\\\nI1210 14:32:35.757670 6211 services_controller.go:451] Built service openshift-etcd-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"},\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.376758 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.392827 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.406555 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.419279 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.434261 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.445219 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.445277 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.445286 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.445306 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.445318 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.452541 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.468372 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.548885 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.548960 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.548971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.548989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.549001 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.562470 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:37 crc kubenswrapper[4727]: E1210 14:32:37.562723 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.652329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.652396 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.652411 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.652434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.652446 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.677218 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.677284 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.677302 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.677330 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.677345 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4727]: E1210 14:32:37.693289 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.698706 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.698787 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.698800 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.698822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.698840 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4727]: E1210 14:32:37.714840 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.719635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.719727 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
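Every one of these failed patches dies at the same place: the TLS handshake to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 fails because the serving certificate expired on 2025-08-24T17:21:41Z, long before the logged clock time. A minimal Go sketch (not part of the log) for reading that validity window straight off the endpoint; the address and the 10s timeout are taken from the logged Post URL, everything else is illustrative:

```go
// certcheck.go — dial the webhook endpoint from the log and print the serving
// certificate's validity window. Verification is skipped deliberately: the
// goal is to read the expired certificate, not to trust it.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	conn, err := tls.DialWithDialer(
		&net.Dialer{Timeout: 10 * time.Second}, // mirrors ?timeout=10s in the log
		"tcp", "127.0.0.1:9743",
		&tls.Config{InsecureSkipVerify: true},
	)
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	now := time.Now().UTC()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%t\n",
			cert.Subject.String(),
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}
```

Against this node the output would show expired=true with notAfter=2025-08-24T17:21:41Z, matching the x509 error the kubelet keeps logging.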
event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.719738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.719758 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.719786 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4727]: E1210 14:32:37.734380 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.740048 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.740095 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.740105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.740123 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.740134 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4727]: E1210 14:32:37.752743 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.757743 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.757806 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
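The shape of this sequence — repeated "Error updating node status, will retry" entries from kubelet_node_status.go:585, then a single "Unable to update node status" with "update node status exceeds retry count" from kubelet_node_status.go:572 — is the kubelet's bounded retry loop around the status patch. Five "will retry" failures appear in this burst, which matches the upstream kubelet's fixed budget (nodeStatusUpdateRetry = 5; assumed here, not stated in the log). A schematic Go sketch of that control flow, with the messages copied from the log; the fifth failed attempt and the give-up entry follow right after this sketch:

```go
// retryloop.go — schematic reconstruction of the kubelet's node-status retry
// loop as it plays out in this log; not the real kubelet code.
package main

import (
	"errors"
	"fmt"
)

// Assumed to match upstream kubelet's fixed retry budget.
const nodeStatusUpdateRetry = 5

// patchNodeStatus stands in for the PATCH against the API server; on this
// node it always fails because the admission webhook's certificate expired.
func patchNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": ` +
		`x509: certificate has expired or is not yet valid`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil // one success ends the loop
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Printf("Unable to update node status: %v\n", err)
	}
}
```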
event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.757817 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.757836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.757848 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4727]: E1210 14:32:37.771089 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:37Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:37 crc kubenswrapper[4727]: E1210 14:32:37.771346 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.773215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.773279 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.773292 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.773307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.773320 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.875943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.876476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.876494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.876522 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.876536 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.980023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.980070 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.980085 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.980107 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4727]: I1210 14:32:37.980121 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.082643 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.082975 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.082998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.083023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.083042 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.185260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.185304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.185319 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.185341 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.185356 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.208728 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/1.log" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.287980 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.288032 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.288042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.288059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.288087 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.391168 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.391221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.391234 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.391259 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.391274 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.494702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.494761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.494775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.494795 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.494808 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.562587 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.562609 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:38 crc kubenswrapper[4727]: E1210 14:32:38.562793 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.562750 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:38 crc kubenswrapper[4727]: E1210 14:32:38.562961 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:38 crc kubenswrapper[4727]: E1210 14:32:38.563252 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.598680 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.599123 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.599211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.599298 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.599368 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.702571 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.703643 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.703665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.703704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.703721 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.806884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.806941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.806951 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.806966 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.806978 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.910280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.910339 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.910350 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.910379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4727]: I1210 14:32:38.910403 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.013173 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.013237 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.013252 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.013282 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.013300 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.117331 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.117400 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.117417 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.117455 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.117499 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.220577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.220653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.220672 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.220731 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.220754 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.323975 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.324033 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.324044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.324070 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.324080 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.343832 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:39 crc kubenswrapper[4727]: E1210 14:32:39.344140 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:33:11.344084823 +0000 UTC m=+95.538859545 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.427308 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.427371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.427382 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.427417 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.427432 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.444843 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.444918 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.444955 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.444985 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:39 crc kubenswrapper[4727]: E1210 14:32:39.445047 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:39 crc kubenswrapper[4727]: E1210 14:32:39.445080 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:39 crc kubenswrapper[4727]: E1210 14:32:39.445097 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:39 crc kubenswrapper[4727]: E1210 14:32:39.445101 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:39 crc kubenswrapper[4727]: E1210 14:32:39.445101 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:39 crc kubenswrapper[4727]: E1210 14:32:39.445170 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:33:11.445149491 +0000 UTC m=+95.639924053 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:39 crc kubenswrapper[4727]: E1210 14:32:39.445234 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:33:11.445199132 +0000 UTC m=+95.639973854 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:39 crc kubenswrapper[4727]: E1210 14:32:39.445121 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:39 crc kubenswrapper[4727]: E1210 14:32:39.445371 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:39 crc kubenswrapper[4727]: E1210 14:32:39.445131 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:39 crc kubenswrapper[4727]: E1210 14:32:39.445457 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:33:11.445443267 +0000 UTC m=+95.640217819 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:39 crc kubenswrapper[4727]: E1210 14:32:39.445503 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 14:33:11.445472608 +0000 UTC m=+95.640247170 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.531063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.531119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.531128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.531154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.531166 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.562789 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:39 crc kubenswrapper[4727]: E1210 14:32:39.562975 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.635966 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.636059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.636075 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.636126 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.636142 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.738688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.738788 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.738802 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.738869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.738881 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.842013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.842082 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.842096 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.842123 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.842141 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.945011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.945077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.945088 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.945107 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4727]: I1210 14:32:39.945118 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.048888 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.048973 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.048984 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.049004 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.049015 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.151781 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.151852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.151863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.151892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.151923 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.255502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.255557 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.255571 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.255594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.255609 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.358035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.358126 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.358158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.358182 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.358193 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.460866 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.460952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.460967 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.460987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.460999 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.562394 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.562464 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.562529 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:40 crc kubenswrapper[4727]: E1210 14:32:40.562588 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:40 crc kubenswrapper[4727]: E1210 14:32:40.562694 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:40 crc kubenswrapper[4727]: E1210 14:32:40.564525 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.568338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.568384 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.568397 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.568415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.568433 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.671334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.671398 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.671412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.671439 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.671453 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.775421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.775491 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.775501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.775522 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.775537 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.878206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.878248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.878257 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.878276 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.878286 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.981199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.981256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.981266 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.981284 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4727]: I1210 14:32:40.981296 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.083680 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.083742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.083760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.083786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.083806 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.187157 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.187213 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.187231 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.187251 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.187262 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.290746 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.290798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.290808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.290828 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.290841 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.394747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.394814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.394828 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.394848 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.394865 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.498465 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.498510 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.498519 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.498540 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.498551 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.562522 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:41 crc kubenswrapper[4727]: E1210 14:32:41.562716 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.600941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.600992 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.601006 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.601027 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.601044 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.703776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.703829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.703841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.703859 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.703876 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.806648 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.806706 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.806719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.806734 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.806745 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.909629 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.909674 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.909685 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.909704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4727]: I1210 14:32:41.909715 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.013026 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.013086 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.013108 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.013131 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.013145 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.116199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.116256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.116266 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.116284 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.116296 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.220761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.220837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.220857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.220877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.220890 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.324323 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.324376 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.324388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.324406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.324419 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.427603 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.427737 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.427751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.427770 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.427780 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.531536 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.531584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.531594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.531609 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.531627 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.562480 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.562480 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.562724 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:42 crc kubenswrapper[4727]: E1210 14:32:42.562953 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:42 crc kubenswrapper[4727]: E1210 14:32:42.563087 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:42 crc kubenswrapper[4727]: E1210 14:32:42.563297 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.635493 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.635542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.635551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.635567 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.635580 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.738104 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.738160 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.738175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.738194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.738207 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.840754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.840806 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.840816 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.840836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.840851 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.943341 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.943394 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.943405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.943423 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4727]: I1210 14:32:42.943435 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.046589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.046641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.046653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.046675 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.046690 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.149573 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.149620 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.149633 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.149652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.149676 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.252399 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.252445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.252455 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.252475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.252489 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.355703 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.355757 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.355769 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.355791 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.355809 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.459176 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.459208 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.459216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.459245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.459257 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.562829 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:43 crc kubenswrapper[4727]: E1210 14:32:43.563186 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.563522 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.563594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.563632 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.563671 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.563695 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.666974 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.667061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.667088 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.667126 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.667152 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.771745 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.771796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.771809 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.771827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.771840 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.875510 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.875581 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.875599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.875634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.875658 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.978831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.978924 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.978938 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.978958 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4727]: I1210 14:32:43.978969 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.083754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.083839 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.083853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.083882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.083930 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.186418 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.186456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.186470 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.186486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.186497 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.289867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.290010 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.290035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.290069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.290085 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.393708 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.393759 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.393773 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.393793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.393812 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.496883 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.497198 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.497259 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.497321 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.497384 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.562471 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.562820 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.562595 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:44 crc kubenswrapper[4727]: E1210 14:32:44.563151 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:44 crc kubenswrapper[4727]: E1210 14:32:44.563312 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:44 crc kubenswrapper[4727]: E1210 14:32:44.563410 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.600650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.600740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.600750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.600768 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.600782 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.703297 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.703338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.703350 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.703372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.703384 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.805923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.806502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.806567 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.806637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.806708 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.941145 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.941199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.941212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.941251 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4727]: I1210 14:32:44.941270 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.044481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.044845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.044978 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.045054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.045125 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.148612 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.149153 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.149248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.149449 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.149650 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.252224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.252276 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.252287 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.252310 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.252323 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.355191 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.355496 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.355615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.355702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.355782 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.458553 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.458600 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.458612 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.458630 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.458644 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.562132 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.562139 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.562217 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.562240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.562291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.562319 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4727]: E1210 14:32:45.562371 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.664861 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.664938 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.664953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.664976 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.664990 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.768139 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.768186 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.768195 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.768210 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.768220 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.871148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.871198 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.871207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.871223 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.871236 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.974138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.974197 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.974210 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.974234 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4727]: I1210 14:32:45.974248 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.077511 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.077561 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.077572 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.077591 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.077602 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.180460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.180514 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.180528 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.180549 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.180564 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.283601 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.283656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.283667 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.283690 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.283702 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.386684 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.386732 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.386749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.386769 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.386782 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.489753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.489808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.489816 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.489838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.489848 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.562581 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.562690 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.562603 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:46 crc kubenswrapper[4727]: E1210 14:32:46.562797 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:46 crc kubenswrapper[4727]: E1210 14:32:46.562954 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:46 crc kubenswrapper[4727]: E1210 14:32:46.563033 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.581549 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.592621 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.592685 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.592705 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.592736 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.592755 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.595446 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.616871 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.630648 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.651741 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.669815 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.681863 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.697590 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.697676 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 
14:32:46.697688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.697713 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.697736 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.698515 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.713101 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.728287 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.746250 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.763080 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\"
:\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.777557 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.793866 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.934968 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.935011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.935021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.935038 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.935050 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.953413 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c95a16f0900852239d64d0dc69c20f43909eb1326cd9d4635350b6f7be39a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"message\\\":\\\"ler 2\\\\nI1210 14:32:33.319711 5950 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:33.321210 5950 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:33.321428 5950 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:33.321440 5950 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 14:32:33.321448 5950 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 14:32:33.321456 5950 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:33.321465 5950 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 14:32:33.321693 5950 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:33.321782 5950 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:33.322080 5950 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:33.322463 5950 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:36Z\\\",\\\"message\\\":\\\"]services.lbConfig(nil)\\\\nF1210 14:32:35.757654 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z]\\\\nI1210 14:32:35.757670 6211 services_controller.go:451] Built service openshift-etcd-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"},\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4727]: I1210 14:32:46.967043 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.038190 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.038759 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.038884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.038997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.039083 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.141777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.141848 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.141859 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.141889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.141925 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.245201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.245246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.245258 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.245278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.245297 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.348285 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.348337 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.348353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.348376 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.348390 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.451321 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.451577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.451648 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.451728 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.451802 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.555247 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.555284 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.555293 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.555316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.555333 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.562584 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:47 crc kubenswrapper[4727]: E1210 14:32:47.562768 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.658632 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.658931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.659018 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.659221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.659299 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.761884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.762130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.762196 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.762267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.762377 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.800673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.800743 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.800762 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.800790 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.800811 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4727]: E1210 14:32:47.817763 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.823673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.823735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.823767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.823802 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.823826 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4727]: E1210 14:32:47.844799 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.849290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.849317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.849327 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.849344 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.849357 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4727]: E1210 14:32:47.866588 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.871130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.871188 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.871202 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.871224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.871239 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4727]: E1210 14:32:47.884464 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.888785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.888831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.888847 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.888873 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.888894 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4727]: E1210 14:32:47.903577 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4727]: E1210 14:32:47.903684 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.905121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.905147 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.905158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.905173 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4727]: I1210 14:32:47.905185 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.007835 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.007923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.007936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.007958 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.007975 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.114192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.114234 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.114242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.114257 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.114270 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.217134 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.217183 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.217192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.217209 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.217221 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.319867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.319933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.319946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.319969 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.319984 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.422533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.422865 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.422975 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.423075 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.423167 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.526583 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.526654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.526671 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.526698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.526717 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.562318 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:48 crc kubenswrapper[4727]: E1210 14:32:48.562558 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.562727 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.562925 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:48 crc kubenswrapper[4727]: E1210 14:32:48.563085 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:48 crc kubenswrapper[4727]: E1210 14:32:48.563159 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.630513 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.630569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.630582 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.630601 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.630612 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.734458 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.734525 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.734541 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.734562 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.734578 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.838392 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.838442 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.838453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.838471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.838481 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.941839 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.941889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.941913 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.941934 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4727]: I1210 14:32:48.941946 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.044856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.044900 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.044931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.044951 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.044964 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.149080 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.149150 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.149160 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.149179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.149188 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.260984 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.261033 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.261042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.261060 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.261071 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.363760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.363822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.363830 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.363847 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.363859 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.466850 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.466884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.466893 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.466931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.466943 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.562828 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:49 crc kubenswrapper[4727]: E1210 14:32:49.563077 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.570176 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.570216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.570224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.570241 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.570253 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.674021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.674065 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.674073 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.674091 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.674104 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.776179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.776225 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.776235 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.776253 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.776270 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.879362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.879399 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.879412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.879428 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.879440 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.981834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.981896 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.981933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.981952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4727]: I1210 14:32:49.981965 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.084852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.084928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.084942 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.084961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.084975 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.187593 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.187652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.187676 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.187702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.187811 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.291801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.291882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.291922 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.291961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.291986 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.400140 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.400220 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.400233 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.400254 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.400274 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.503327 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.503390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.503404 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.503427 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.503441 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.562968 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.563027 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:50 crc kubenswrapper[4727]: E1210 14:32:50.563159 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.563340 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:50 crc kubenswrapper[4727]: E1210 14:32:50.563363 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:50 crc kubenswrapper[4727]: E1210 14:32:50.563893 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.573734 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs\") pod \"network-metrics-daemon-wwmwn\" (UID: \"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\") " pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:50 crc kubenswrapper[4727]: E1210 14:32:50.574201 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:50 crc kubenswrapper[4727]: E1210 14:32:50.574312 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs podName:2bcea03d-69bd-4530-91b9-ca3ba1ffc871 nodeName:}" failed. No retries permitted until 2025-12-10 14:33:22.574284421 +0000 UTC m=+106.769058963 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs") pod "network-metrics-daemon-wwmwn" (UID: "2bcea03d-69bd-4530-91b9-ca3ba1ffc871") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.606378 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.606429 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.606441 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.606459 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.606472 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.710056 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.710122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.710137 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.710161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.710179 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
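The "durationBeforeRetry 32s" in the failed mount above is one step of an exponential backoff: each failed attempt doubles the delay, and the operation is blocked until the deadline passes (here 14:33:22, 32 seconds after the failure). The sketch below reproduces that progression; the 500ms initial delay and roughly two-minute cap are assumptions about kubelet defaults, not values taken from this log:

```go
// backoff.go - a sketch of the doubling retry delay visible in the
// nestedpendingoperations entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 500 * time.Millisecond  // assumed initial backoff
		maxDelay     = 2*time.Minute + 2*time.Second // assumed cap
	)
	delay := initialDelay
	for i := 0; i < 10; i++ {
		fmt.Printf("attempt %d: durationBeforeRetry %v\n", i+1, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

On a 500ms base, 32s corresponds to the seventh attempt (500ms x 2^6), consistent with a volume that has been failing repeatedly since shortly after the kubelet started.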
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.813956 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.814018 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.814030 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.814053 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.814067 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.916700 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.916760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.916774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.916793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4727]: I1210 14:32:50.916806 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.020135 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.020202 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.020218 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.020243 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.020261 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.122992 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.123065 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.123105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.123127 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.123147 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.226243 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.226288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.226299 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.226320 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.226330 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.329591 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.329637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.329648 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.329667 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.329683 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.432316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.432370 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.432380 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.432400 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.432412 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.536220 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.536283 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.536302 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.536329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.536347 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.562454 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:51 crc kubenswrapper[4727]: E1210 14:32:51.562675 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.564273 4727 scope.go:117] "RemoveContainer" containerID="22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.580735 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.602687 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.622883 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.639562 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.639631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.639645 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.639666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.639679 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.640719 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.656504 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.677338 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.695483 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.708863 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.726315 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.740649 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.742625 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.742903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.742996 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.743018 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.743037 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.756248 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.767796 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.779975 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.794870 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.814689 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e747d26c6a9cc568b690991ae4653a35636d02
8dba4298066111da7b1d094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:36Z\\\",\\\"message\\\":\\\"]services.lbConfig(nil)\\\\nF1210 14:32:35.757654 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z]\\\\nI1210 14:32:35.757670 6211 services_controller.go:451] Built service openshift-etcd-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"},\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k8b7p_openshift-ovn-kubernetes(5b9f88bc-1b6e-4dd4-9d6e-febdde2facba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.828697 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:51Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.845582 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.845615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.845624 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.845640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.845651 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.948795 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.948842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.948853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.948872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4727]: I1210 14:32:51.948889 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.053180 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.053244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.053259 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.053283 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.053304 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.157041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.157098 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.157110 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.157132 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.157190 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.281546 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.281735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.281773 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.281973 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.282493 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.289335 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/1.log" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.296199 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerStarted","Data":"528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33"} Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.297749 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.330840 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.346530 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 
2025-08-24T17:21:41Z" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.364917 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.386064 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.400964 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.420092 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.438868 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.451797 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.468616 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.482300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.482366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.482381 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.482402 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.482416 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.485001 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.504932 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.588565 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:52 crc 
kubenswrapper[4727]: I1210 14:32:52.588777 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.588786 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.588791 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:52 crc kubenswrapper[4727]: E1210 14:32:52.588940 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:52 crc kubenswrapper[4727]: E1210 14:32:52.589212 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:52 crc kubenswrapper[4727]: E1210 14:32:52.589350 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.590505 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.590538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.590552 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.590571 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.590582 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.610391 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.635095 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.657840 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681
cd2adf86413f7c8fe7f35b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:36Z\\\",\\\"message\\\":\\\"]services.lbConfig(nil)\\\\nF1210 14:32:35.757654 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z]\\\\nI1210 14:32:35.757670 6211 services_controller.go:451] Built service openshift-etcd-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"},\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initC
ontainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.673059 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:52Z is after 2025-08-24T17:21:41Z" Dec 10 
14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.694123 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.694175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.694185 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.694203 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.694218 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.798383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.798493 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.798555 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.798627 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.798659 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.902438 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.902491 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.902505 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.902524 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4727]: I1210 14:32:52.902538 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.017007 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.017066 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.017076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.017104 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.017120 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.119178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.119229 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.119238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.119255 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.119265 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.222189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.222242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.222253 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.222270 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.222281 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.325154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.325199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.325211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.325232 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.325246 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.429370 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.429440 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.429450 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.429469 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.429482 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.537919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.537983 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.537995 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.538012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.538025 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.562186 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:53 crc kubenswrapper[4727]: E1210 14:32:53.562351 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.642517 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.642596 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.642617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.642644 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.642667 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.752940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.752991 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.753002 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.753019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.753030 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.856369 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.856430 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.856440 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.856460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.856472 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.960471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.960548 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.960584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.960635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4727]: I1210 14:32:53.960655 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.064743 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.065061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.065183 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.065222 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.065245 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.168836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.169002 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.169032 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.169072 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.169098 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.272286 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.272349 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.272362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.272382 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.272403 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.307015 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/2.log" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.307689 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/1.log" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.376340 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.376394 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.376412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.376490 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.376505 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.377401 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerID="528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33" exitCode=1 Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.377471 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerDied","Data":"528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33"} Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.377525 4727 scope.go:117] "RemoveContainer" containerID="22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.378893 4727 scope.go:117] "RemoveContainer" containerID="528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33" Dec 10 14:32:54 crc kubenswrapper[4727]: E1210 14:32:54.379171 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k8b7p_openshift-ovn-kubernetes(5b9f88bc-1b6e-4dd4-9d6e-febdde2facba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.393769 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.404815 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.429969 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:36Z\\\",\\\"message\\\":\\\"]services.lbConfig(nil)\\\\nF1210 14:32:35.757654 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z]\\\\nI1210 14:32:35.757670 6211 services_controller.go:451] Built service openshift-etcd-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"},\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"kg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:53.262099 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 14:32:53.262160 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 14:32:53.262167 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 14:32:53.262174 6535 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:53.262233 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 14:32:53.262260 6535 factory.go:656] Stopping watch factory\\\\nI1210 14:32:53.262286 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 
14:32:53.262298 6535 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:53.262304 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 14:32:53.262311 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:53.262397 6535 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:53.262741 6535 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\
":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.447590 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 
14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.465714 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.480716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.481183 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.482081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.482209 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.482324 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.483377 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.500214 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.522232 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.536991 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.558814 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 
2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.562434 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.562528 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.562590 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:54 crc kubenswrapper[4727]: E1210 14:32:54.563072 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:54 crc kubenswrapper[4727]: E1210 14:32:54.563213 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:54 crc kubenswrapper[4727]: E1210 14:32:54.563347 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.580226 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.583427 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.585084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.585128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.585138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.585155 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.585166 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.600638 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.618396 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.636373 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc 
kubenswrapper[4727]: I1210 14:32:54.655600 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.671299 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.688079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.688142 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.688154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.688178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.688192 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.791283 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.791347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.791362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.791390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.791406 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.894360 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.894423 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.894434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.894454 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.894466 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
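
The repeated `KubeletNotReady` condition in these entries has a single trigger: the container runtime reports `NetworkReady=false` because nothing has yet written a CNI configuration into `/etc/kubernetes/cni/net.d/` (on this node that is ovnkube-node's job, and its `ovnkube-controller` container is crash-looping further down in the log). A minimal sketch of that readiness test, assuming the usual rule that any `*.conf`, `*.conflist`, or `*.json` file in the directory counts; `hasCNIConfig` is a hypothetical helper for illustration, not kubelet code:

```go
// cnicheck: a hypothetical helper mirroring the test behind
// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether the CNI config directory contains at least
// one candidate network configuration file.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read config dir:", err)
		os.Exit(1)
	}
	if !ok {
		fmt.Println("NetworkReady=false: no CNI configuration file found")
		return
	}
	fmt.Println("CNI configuration present")
}
```

Once the network plugin drops its config file into that directory, the `Ready` condition should flip back on the kubelet's next status sync.
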
Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.997809 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.997875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.997887 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.997925 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4727]: I1210 14:32:54.997938 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.102446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.102541 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.102562 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.102629 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.102651 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.206860 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.206976 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.206990 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.207010 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.207023 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.311004 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.311428 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.311505 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.311587 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.311721 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.383855 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/2.log" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.415789 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.415867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.415883 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.415939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.415961 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.518991 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.519046 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.519058 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.519075 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.519086 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.562584 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:55 crc kubenswrapper[4727]: E1210 14:32:55.562760 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.621500 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.621950 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.622063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.622133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.622207 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.725750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.725861 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.725876 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.725901 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.725948 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.830092 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.830194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.830213 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.830244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.830264 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.933883 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.934316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.934423 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.934569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4727]: I1210 14:32:55.934650 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.158410 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.158520 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.158579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.158634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.158685 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.547154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.547224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.547234 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.547281 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.547295 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.562819 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:56 crc kubenswrapper[4727]: E1210 14:32:56.563002 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.563057 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:56 crc kubenswrapper[4727]: E1210 14:32:56.563211 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.564256 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:56 crc kubenswrapper[4727]: E1210 14:32:56.564336 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.580688 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dde5dc5-b6ff-49be-870f-a8b04430ef3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815aab5c9a40b70a41cbf02e8e219ef934e21cf097b8ce3d7507ab1d43809a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bde
af3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.589480 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.609749 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681
cd2adf86413f7c8fe7f35b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:36Z\\\",\\\"message\\\":\\\"]services.lbConfig(nil)\\\\nF1210 14:32:35.757654 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z]\\\\nI1210 14:32:35.757670 6211 services_controller.go:451] Built service openshift-etcd-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"},\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"kg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:53.262099 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 14:32:53.262160 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 14:32:53.262167 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 14:32:53.262174 6535 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:53.262233 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 14:32:53.262260 6535 factory.go:656] Stopping watch factory\\\\nI1210 14:32:53.262286 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 14:32:53.262298 6535 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:53.262304 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 14:32:53.262311 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:53.262397 6535 
reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:53.262741 6535 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.629032 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 
14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.645547 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.652141 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.652181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.652192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.652222 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.652236 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.668185 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.685167 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.701030 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.719610 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.740674 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.757515 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.757635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.757653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.757675 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.757710 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.759262 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.772890 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.787509 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.800806 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.813846 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\"
:\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.827183 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.841853 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.859613 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.861105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.861148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.861159 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.861179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.861192 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.964389 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.964438 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.964450 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.964505 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:56 crc kubenswrapper[4727]: I1210 14:32:56.964520 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:57 crc kubenswrapper[4727]: I1210 14:32:57.562784 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 10 14:32:57 crc kubenswrapper[4727]: E1210 14:32:57.562946 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.202707 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.202769 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.202785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.202807 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.202820 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 14:32:58 crc kubenswrapper[4727]: E1210 14:32:58.233805 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:58Z is after 2025-08-24T17:21:41Z"
Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.239494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.239577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.239595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.239623 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.239639 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4727]: E1210 14:32:58.267310 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:58Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.275298 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.275355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.275365 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.275384 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.275395 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4727]: E1210 14:32:58.293112 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:58Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.297628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.297689 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.297703 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.297726 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.297740 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4727]: E1210 14:32:58.311087 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:58Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.315964 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.316007 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.316018 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.316034 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.316049 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4727]: E1210 14:32:58.327614 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:58Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:58 crc kubenswrapper[4727]: E1210 14:32:58.327729 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.329024 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.329054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.329064 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.329076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.329085 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.431480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.431526 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.431536 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.431551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.431562 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.535348 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.535401 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.535411 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.535429 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.535441 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.562861 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.563040 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.563032 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:58 crc kubenswrapper[4727]: E1210 14:32:58.563207 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:32:58 crc kubenswrapper[4727]: E1210 14:32:58.563316 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:58 crc kubenswrapper[4727]: E1210 14:32:58.563473 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.638231 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.638293 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.638303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.638319 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.638331 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.740974 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.741018 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.741028 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.741044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.741056 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.844349 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.844402 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.844414 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.844436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.844455 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.948033 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.948093 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.948105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.948124 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4727]: I1210 14:32:58.948138 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.052290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.052350 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.052362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.052384 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.052398 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.156935 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.157000 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.157013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.157044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.157062 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.260024 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.260067 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.260078 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.260096 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.260107 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.363088 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.363151 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.363163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.363185 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.363202 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.467419 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.467478 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.467492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.467513 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.467535 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.563030 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:59 crc kubenswrapper[4727]: E1210 14:32:59.563215 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.570623 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.570700 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.570718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.570740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.570756 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.673829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.673947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.673959 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.673978 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.673988 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.777717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.777807 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.777818 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.777840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.777854 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.881285 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.881381 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.881399 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.881421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.881435 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.984083 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.984151 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.984170 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.984192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4727]: I1210 14:32:59.984207 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.086841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.086893 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.086920 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.086941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.086953 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.190481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.190541 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.190551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.190571 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.190582 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.294454 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.294505 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.294516 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.294533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.294544 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.396895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.396953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.396965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.396984 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.396995 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.499997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.500045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.500055 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.500074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.500085 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.564255 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:00 crc kubenswrapper[4727]: E1210 14:33:00.564386 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.564567 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:00 crc kubenswrapper[4727]: E1210 14:33:00.564619 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.564763 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:00 crc kubenswrapper[4727]: E1210 14:33:00.564831 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.603584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.603631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.603646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.603666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.603679 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.706881 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.706962 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.706975 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.707021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.707041 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.810016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.810088 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.810104 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.810130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.810142 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.914024 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.914088 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.914098 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.914121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4727]: I1210 14:33:00.914138 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.016333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.016379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.016392 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.016413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.016425 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.119746 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.119796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.119805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.119827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.119838 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.223611 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.223652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.223663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.223683 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.223692 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.326781 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.326817 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.326825 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.326841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.326850 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.429425 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.429475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.429486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.429507 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.429551 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.532858 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.532972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.532994 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.533020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.533038 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.562267 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:01 crc kubenswrapper[4727]: E1210 14:33:01.562402 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.636635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.636683 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.636695 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.636717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.636729 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.739918 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.739959 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.739971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.740012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.740024 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.843648 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.843707 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.843733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.843757 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.843774 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.946338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.946403 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.946416 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.946441 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4727]: I1210 14:33:01.946452 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.049005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.049049 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.049061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.049079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.049093 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.151578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.151676 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.151691 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.151721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.151738 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.255129 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.255275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.255288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.255305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.255314 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.358709 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.358796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.358809 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.358832 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.358845 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.462715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.462758 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.462775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.462805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.462817 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.562309 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.562313 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.563232 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:02 crc kubenswrapper[4727]: E1210 14:33:02.563125 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:02 crc kubenswrapper[4727]: E1210 14:33:02.565121 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:02 crc kubenswrapper[4727]: E1210 14:33:02.567558 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.570534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.570582 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.570592 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.570610 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.570623 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.672929 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.672975 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.672985 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.672999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.673008 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.776723 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.776780 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.776791 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.776812 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.776821 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.879945 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.879987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.879997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.880016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.880030 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.982698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.982741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.982749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.982767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4727]: I1210 14:33:02.982777 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.085562 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.085609 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.085620 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.085640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.085650 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.188206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.188253 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.188268 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.188288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.188299 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.292131 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.292192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.292205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.292224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.292238 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.395467 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.395510 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.395519 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.395536 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.395546 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.498774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.498823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.498835 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.498857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.498870 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.562557 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:03 crc kubenswrapper[4727]: E1210 14:33:03.562749 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.601587 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.601644 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.601655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.601672 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.601683 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.705008 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.705041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.705050 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.705066 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.705075 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.807401 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.807476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.807488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.807508 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.807519 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.910400 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.910440 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.910448 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.910464 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4727]: I1210 14:33:03.910475 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.013535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.013586 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.013600 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.013620 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.013631 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.117023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.117069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.117081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.117103 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.117115 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.220232 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.220302 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.220322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.220355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.220377 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.324081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.324137 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.324146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.324164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.324177 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.427872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.427948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.427960 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.427981 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.427994 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.531479 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.531543 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.531555 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.531586 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.531601 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.562164 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:04 crc kubenswrapper[4727]: E1210 14:33:04.562348 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.562421 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.562535 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:04 crc kubenswrapper[4727]: E1210 14:33:04.562609 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:04 crc kubenswrapper[4727]: E1210 14:33:04.562842 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.635421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.635481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.635496 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.635520 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.635538 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.743606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.743689 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.743713 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.743744 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.743768 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.847726 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.848079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.848248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.848390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.848521 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.952284 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.952454 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.952467 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.952488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4727]: I1210 14:33:04.952501 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.056029 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.056305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.056386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.056492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.056578 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.159708 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.159754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.159763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.159781 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.159793 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.263069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.263130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.263143 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.263168 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.263180 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.367101 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.367233 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.367282 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.367307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.367320 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.471246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.471323 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.471337 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.471372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.471388 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.563107 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:05 crc kubenswrapper[4727]: E1210 14:33:05.563378 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.574879 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.574978 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.574990 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.575023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.575035 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.678381 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.678445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.678456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.678477 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.678491 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.781112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.781162 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.781175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.781197 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.781209 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.884452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.884534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.884548 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.884603 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.884620 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.987609 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.987682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.987698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.987725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4727]: I1210 14:33:05.987740 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.090734 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.090786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.090795 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.090814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.090827 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.193109 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.193199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.193212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.193232 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.193268 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.296109 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.296169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.296180 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.296200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.296212 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.399605 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.399681 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.399708 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.399739 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.399765 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.504701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.504810 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.504834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.505466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.505665 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.562702 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.562820 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:06 crc kubenswrapper[4727]: E1210 14:33:06.562899 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:06 crc kubenswrapper[4727]: E1210 14:33:06.563033 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.563099 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:06 crc kubenswrapper[4727]: E1210 14:33:06.563165 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.579796 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.598846 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-
cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.611260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.611305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.611317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.611338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.611352 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.613757 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.629460 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.644305 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.665785 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.681106 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.695311 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.710699 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dde5dc5-b6ff-49be-870f-a8b04430ef3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815aab5c9a40b70a41cbf02e8e219ef934e21cf097b8ce3d7507ab1d43809a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.714458 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.714491 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.714500 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.714516 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.714526 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.735785 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7024ca23-6f38-49bb-ba03-b68bbe28204d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc9e274851f99977da7e9cf67e949f0ca072dc6621914ecddba9778fdfca4890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d00cd326bdadaf828c87f3de300e3734078b9e6b4248c583755653611bea702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d9e6dbcaea461c31eb44e844655898d6a3bfa039d1104dd762c046d30b1f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cc808827af2142e71916784e9d42138f878bb443c5b88b10a048268577da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a010e72ae83ee1b20433f438efd67006d4bf039011cfa9c84ae5d19c4cfacf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.757125 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e747d26c6a9cc568b690991ae4653a35636d028dba4298066111da7b1d094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:36Z\\\",\\\"message\\\":\\\"]services.lbConfig(nil)\\\\nF1210 14:32:35.757654 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z]\\\\nI1210 14:32:35.757670 6211 services_controller.go:451] Built service openshift-etcd-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"},\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"kg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:53.262099 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 14:32:53.262160 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 14:32:53.262167 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 14:32:53.262174 6535 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:53.262233 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 14:32:53.262260 6535 factory.go:656] Stopping watch factory\\\\nI1210 14:32:53.262286 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 
14:32:53.262298 6535 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:53.262304 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 14:32:53.262311 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:53.262397 6535 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:53.262741 6535 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\
":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.770159 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 
14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.788153 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.805011 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.817668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.817715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.817723 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.817741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.817750 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.822258 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.836965 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.852679 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.868549 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.920247 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.920306 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.920323 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.920345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4727]: I1210 14:33:06.920361 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.023888 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.023966 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.023977 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.023998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.024011 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.128608 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.128683 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.128695 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.128722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.128741 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.232082 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.232145 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.232163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.232189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.232240 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.335882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.335963 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.335978 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.336005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.336025 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.440442 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.440511 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.440524 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.440546 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.440562 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.544107 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.544214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.544233 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.544300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.544329 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.562240 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:07 crc kubenswrapper[4727]: E1210 14:33:07.562444 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.647853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.647951 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.647971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.648000 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.648021 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.756921 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.756971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.756981 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.756999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.757010 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.860989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.861038 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.861050 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.861068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.861080 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.964518 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.964565 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.964576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.964594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4727]: I1210 14:33:07.964606 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.066841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.066887 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.066895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.066954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.066966 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.170028 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.170093 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.170112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.170139 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.170157 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.273491 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.273554 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.273567 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.273617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.273633 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.361100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.361151 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.361164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.361183 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.361197 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4727]: E1210 14:33:08.379671 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.384889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.385003 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.385050 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.385069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.385082 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4727]: E1210 14:33:08.400914 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.405189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.405220 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.405237 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.405314 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.405329 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4727]: E1210 14:33:08.419658 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.424433 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.424470 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.424482 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.424498 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.424511 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4727]: E1210 14:33:08.438705 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.443412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.443500 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.443514 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.443538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.443554 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4727]: E1210 14:33:08.456503 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: E1210 14:33:08.456682 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.458775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.458835 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.458845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.458866 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.458877 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.562159 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.562221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.562240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.562264 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.562282 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.563199 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:08 crc kubenswrapper[4727]: E1210 14:33:08.563366 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.563449 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:08 crc kubenswrapper[4727]: E1210 14:33:08.563636 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.563772 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:08 crc kubenswrapper[4727]: E1210 14:33:08.564772 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.565564 4727 scope.go:117] "RemoveContainer" containerID="528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33" Dec 10 14:33:08 crc kubenswrapper[4727]: E1210 14:33:08.565812 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k8b7p_openshift-ovn-kubernetes(5b9f88bc-1b6e-4dd4-9d6e-febdde2facba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.587110 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.609506 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.631799 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.645058 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.658404 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.666154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.666218 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.666235 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.666257 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.666274 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.672782 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.688485 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.706209 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.719660 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc 
kubenswrapper[4727]: I1210 14:33:08.737895 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.748676 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.759094 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.769434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.769487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.769502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.769532 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.769552 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.772146 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.786190 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.799039 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 
14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.810958 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dde5dc5-b6ff-49be-870f-a8b04430ef3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815aab5c9a40b70a41cbf02e8e219ef934e21cf097b8ce3d7507ab1d43809a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.838916 4727 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7024ca23-6f38-49bb-ba03-b68bbe28204d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc9e274851f99977da7e9cf67e949f0ca072dc6621914ecddba9778fdfca4890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d00cd326bdadaf828c87f3de300e3734078b9e6b4248c583755653611bea702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d9e6dbcaea461c31eb44e844655898d6a3bfa039d1104dd762c046d30b1f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://40cc808827af2142e71916784e9d42138f878bb443c5b88b10a048268577da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a010e72ae83ee1b20433f438efd67006d4bf039011cfa9c84ae5d19c4cfacf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bf396
ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.856832 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681
cd2adf86413f7c8fe7f35b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"kg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:53.262099 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 14:32:53.262160 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 14:32:53.262167 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 14:32:53.262174 6535 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:53.262233 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 14:32:53.262260 6535 factory.go:656] Stopping watch factory\\\\nI1210 14:32:53.262286 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 14:32:53.262298 6535 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:53.262304 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 14:32:53.262311 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:53.262397 6535 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:53.262741 6535 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k8b7p_openshift-ovn-kubernetes(5b9f88bc-1b6e-4dd4-9d6e-febdde2facba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:08Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.872190 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.872246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.872260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.872278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.872291 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.975888 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.975989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.976015 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.976048 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4727]: I1210 14:33:08.976072 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.079290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.079334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.079347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.079367 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.079381 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.182272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.182309 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.182318 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.182333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.182343 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.285450 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.285502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.285513 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.285534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.285547 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.388254 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.388363 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.388377 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.388400 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.388417 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.491635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.491685 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.491696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.491714 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.491726 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.562372 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:09 crc kubenswrapper[4727]: E1210 14:33:09.562654 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.594846 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.594926 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.594940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.594962 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.594977 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.697650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.697695 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.697707 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.697739 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.697751 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.800727 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.800784 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.800801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.800826 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.800842 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.903836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.903895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.903957 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.903980 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4727]: I1210 14:33:09.903994 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.008641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.008704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.008719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.008740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.008767 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.112302 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.112347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.112358 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.112422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.112436 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.215184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.215228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.215238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.215257 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.215271 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.318888 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.319010 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.319023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.319050 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.319063 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.422425 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.422470 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.422480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.422498 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.422510 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.526271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.526355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.526374 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.526398 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.526421 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.562645 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:10 crc kubenswrapper[4727]: E1210 14:33:10.562851 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.562638 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.563077 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:10 crc kubenswrapper[4727]: E1210 14:33:10.563192 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:10 crc kubenswrapper[4727]: E1210 14:33:10.563364 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.607393 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6ph7v_c724a700-1960-4452-9106-d71685d1b38c/kube-multus/0.log" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.607477 4727 generic.go:334] "Generic (PLEG): container finished" podID="c724a700-1960-4452-9106-d71685d1b38c" containerID="9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596" exitCode=1 Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.607541 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6ph7v" event={"ID":"c724a700-1960-4452-9106-d71685d1b38c","Type":"ContainerDied","Data":"9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596"} Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.609313 4727 scope.go:117] "RemoveContainer" containerID="9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.628426 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.629633 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.629697 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.629710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.629730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.629745 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.644333 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.657437 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dde5dc5-b6ff-49be-870f-a8b04430ef3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815aab5c9a40b70a41cbf02e8e219ef934e21cf097b8ce3d7507ab1d43809a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.692506 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7024ca23-6f38-49bb-ba03-b68bbe28204d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc9e274851f99977da7e9cf67e949f0ca072dc6621914ecddba9778fdfca4890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d00cd326bdadaf828c87f3de300e3734078b9e6b4248c583755653611bea702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d9e6dbcaea461c31eb44e844655898d6a3bfa039d1104dd762c046d30b1f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cc808827af2142e71916784e9d42138f878bb
443c5b88b10a048268577da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a010e72ae83ee1b20433f438efd67006d4bf039011cfa9c84ae5d19c4cfacf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.721646 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681
cd2adf86413f7c8fe7f35b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"kg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:53.262099 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 14:32:53.262160 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 14:32:53.262167 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 14:32:53.262174 6535 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:53.262233 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 14:32:53.262260 6535 factory.go:656] Stopping watch factory\\\\nI1210 14:32:53.262286 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 14:32:53.262298 6535 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:53.262304 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 14:32:53.262311 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:53.262397 6535 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:53.262741 6535 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k8b7p_openshift-ovn-kubernetes(5b9f88bc-1b6e-4dd4-9d6e-febdde2facba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.732945 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.733026 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.733041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.733092 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.733107 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.734992 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.748934 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.767394 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.782737 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.801671 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.819283 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.838653 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.839025 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.839061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.839072 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.839091 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.839104 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
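Every "Failed to update status for pod" entry above fails for the same reason: the kubelet's POST to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is rejected during TLS verification because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before the node's clock reading of 2025-12-10T14:33:10Z. The two timestamps the verifier compared are printed in the error text itself; below is a minimal Python sketch of the same expiry check (the regex and variable names are illustrative, not kubelet code):

```python
import re
from datetime import datetime, timezone

# Error suffix copied verbatim from the entries above.
err = ("tls: failed to verify certificate: x509: certificate has expired "
       "or is not yet valid: current time 2025-12-10T14:33:10Z is after "
       "2025-08-24T17:21:41Z")

now, not_after = (
    datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    for ts in re.search(r"current time (\S+) is after (\S+)", err).groups()
)
assert now > not_after  # the condition that makes x509 verification fail
print(f"certificate expired {now - not_after} before this entry")  # 107 days, 21:11:29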
Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.854629 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.866350 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
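The patch body quoted inside each of these errors is an ordinary JSON strategic-merge patch; it is just escaped twice on its way into the journal (once when the kubelet quotes the patch into its error string, once by the log formatting), so it can be recovered mechanically. A sketch, using a fragment of one payload truncated to its metadata stanza (the double unicode_escape pass assumes the fragment is pasted as literal text):

```python
import json

# Fragment of a 'failed to patch status' payload from the entries here (truncated).
raw = r'{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"}}'

once = raw.encode().decode("unicode_escape")    # -> {\"metadata\":{\"uid\":...}}
twice = once.encode().decode("unicode_escape")  # -> {"metadata":{"uid":...}}
patch = json.loads(twice)
print(patch["metadata"]["uid"])  # c84a1f7a-5938-4bec-9ff5-5033db566f4d
```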
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.880990 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.895872 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.909870 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:33:10Z\\\",\\\"message\\\":\\\"2025-12-10T14:32:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0\\\\n2025-12-10T14:32:24+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0 to /host/opt/cni/bin/\\\\n2025-12-10T14:32:25Z [verbose] multus-daemon started\\\\n2025-12-10T14:32:25Z [verbose] Readiness Indicator file check\\\\n2025-12-10T14:33:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.921257 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status 
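The kube-multus termination message above is the other half of the same deadlock: multus-daemon started at 14:32:25, then polled for the readiness indicator file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf (written once the default OVN-Kubernetes network is up) until it gave up at 14:33:10 and exited 1. A rough Python equivalent of that poll-until-deadline loop (the function name and 1-second interval are illustrative; the path and ~45 s budget come from the log):

```python
import time
from pathlib import Path

def wait_for_indicator(path: str, timeout_s: float, interval_s: float = 1.0) -> bool:
    """Poll until the CNI readiness indicator file exists or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if Path(path).exists():
            return True
        time.sleep(interval_s)
    return False

if not wait_for_indicator("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf", 45):
    print("timed out waiting for readiness indicator")  # multus exits 1 at this point
```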
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.941504 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.941539 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.941547 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.941561 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4727]: I1210 14:33:10.941571 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.044457 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.044503 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.044516 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.044533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.044544 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.147485 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.147527 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.147539 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.147557 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.147571 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
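The interleaved node-status records all carry the same machine-readable payload: Ready=False with reason KubeletNotReady, because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/. The condition is plain JSON after the condition= key, so it lifts straight out of a journal line; a small sketch (the entry string below is abbreviated from the records above):

```python
import json
import re

# Abbreviated 'Node became not ready' payload from the entries above.
entry = ('condition={"type":"Ready","status":"False",'
         '"lastTransitionTime":"2025-12-10T14:33:11Z",'
         '"reason":"KubeletNotReady",'
         '"message":"container runtime network not ready: NetworkReady=false '
         'reason:NetworkPluginNotReady message:Network plugin returns error: '
         'no CNI configuration file in /etc/kubernetes/cni/net.d/. '
         'Has your network provider started?"}')

cond = json.loads(re.search(r"condition=(\{.*\})", entry).group(1))
print(cond["reason"], "->", cond["message"].split(":")[0])
# KubeletNotReady -> container runtime network not ready
```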
Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.250785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.250851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.250878 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.250943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.250970 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.463727 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.463849 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.463891 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.463940 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.463987 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:11 crc kubenswrapper[4727]: E1210 14:33:11.464052 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:15.463984974 +0000 UTC m=+159.658759656 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:33:11 crc kubenswrapper[4727]: E1210 14:33:11.464144 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:33:11 crc kubenswrapper[4727]: E1210 14:33:11.464220 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:34:15.464205279 +0000 UTC m=+159.658979821 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:33:11 crc kubenswrapper[4727]: E1210 14:33:11.464238 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:33:11 crc kubenswrapper[4727]: E1210 14:33:11.464284 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:33:11 crc kubenswrapper[4727]: E1210 14:33:11.464318 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:33:11 crc kubenswrapper[4727]: E1210 14:33:11.464386 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 14:34:15.464370303 +0000 UTC m=+159.659144855 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:33:11 crc kubenswrapper[4727]: E1210 14:33:11.464434 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:33:11 crc kubenswrapper[4727]: E1210 14:33:11.464520 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:34:15.464481626 +0000 UTC m=+159.659256178 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:33:11 crc kubenswrapper[4727]: E1210 14:33:11.464935 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:33:11 crc kubenswrapper[4727]: E1210 14:33:11.464959 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:33:11 crc kubenswrapper[4727]: E1210 14:33:11.464972 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:33:11 crc kubenswrapper[4727]: E1210 14:33:11.465034 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:34:15.465012588 +0000 UTC m=+159.659787300 (durationBeforeRetry 1m4s). 
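The retry stamps in these nestedpendingoperations errors are internally consistent: the 1m4s (64 s) durationBeforeRetry is consistent with a doubling backoff, and the Go monotonic-clock suffix m=+159.659787300 (seconds since the process started) puts the kubelet's start near 14:31:35.8, matching the 14:31:36 pod startTime values seen earlier. A quick arithmetic check (values copied from the kube-api-access-cqllr entry; this is plain date math, not kubelet code):

```python
from datetime import datetime, timedelta, timezone

retry_at  = datetime(2025, 12, 10, 14, 34, 15, 465012, tzinfo=timezone.utc)
monotonic = timedelta(seconds=159.659787)      # m=+159.659787300
backoff   = timedelta(minutes=1, seconds=4)    # durationBeforeRetry 1m4s

print(retry_at - backoff)    # 14:33:11.465012 -- when the mount attempt failed
print(retry_at - monotonic)  # 14:31:35.805225 -- approximate kubelet start
```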
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.465307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.465345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.465356 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.465375 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.465389 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.563166 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:11 crc kubenswrapper[4727]: E1210 14:33:11.565082 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.569347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.569395 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.569406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.569427 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.569451 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.616509 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6ph7v_c724a700-1960-4452-9106-d71685d1b38c/kube-multus/0.log" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.616575 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6ph7v" event={"ID":"c724a700-1960-4452-9106-d71685d1b38c","Type":"ContainerStarted","Data":"029e1a2087c1fc515492e739da376e0970f5738dadd2a6842d8dfea64c28fe2f"} Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.637222 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.650015 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.661098 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.671776 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.672039 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.672063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.672074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.672093 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.672105 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.686034 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.698478 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.709945 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\
\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.722571 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.736666 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.750769 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029e1a2087c1fc515492e739da376e0970f5738dadd2a6842d8dfea64c28fe2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:33:10Z\\\",\\\"message\\\":\\\"2025-12-10T14:32:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0\\\\n2025-12-10T14:32:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0 to /host/opt/cni/bin/\\\\n2025-12-10T14:32:25Z [verbose] multus-daemon started\\\\n2025-12-10T14:32:25Z [verbose] Readiness Indicator file check\\\\n2025-12-10T14:33:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.761484 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.773880 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.774708 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.774836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.774949 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.775069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.775174 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.783402 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.800085 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.818488 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7024ca23-6f38-49bb-ba03-b68bbe28204d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc9e274851f99977da7e9cf67e949f0ca072dc6621914ecddba9778fdfca4890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d00cd326bdadaf828c87f3de300e3734078b9e6b4248c583755653611bea702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d9e6dbcaea461c31eb44e844655898d6a3bfa039d1104dd762c046d30b1f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cc808827af2142e71916784e9d42138f878bb
443c5b88b10a048268577da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a010e72ae83ee1b20433f438efd67006d4bf039011cfa9c84ae5d19c4cfacf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.836742 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681
cd2adf86413f7c8fe7f35b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"kg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:53.262099 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 14:32:53.262160 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 14:32:53.262167 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 14:32:53.262174 6535 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:53.262233 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 14:32:53.262260 6535 factory.go:656] Stopping watch factory\\\\nI1210 14:32:53.262286 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 14:32:53.262298 6535 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:53.262304 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 14:32:53.262311 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:53.262397 6535 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:53.262741 6535 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k8b7p_openshift-ovn-kubernetes(5b9f88bc-1b6e-4dd4-9d6e-febdde2facba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.848652 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.859727 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dde5dc5-b6ff-49be-870f-a8b04430ef3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815aab5c9a40b70a41cbf02e8e219ef934e21cf097b8ce3d7507ab1d43809a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.877841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.877885 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.877894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.877923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.877934 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.981733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.981790 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.981807 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.981827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4727]: I1210 14:33:11.981839 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.085105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.085167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.085179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.085200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.085213 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.187676 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.187723 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.187735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.187755 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.187772 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.291154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.291203 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.291213 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.291231 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.291243 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.395389 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.395438 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.395448 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.395467 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.395481 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
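
The multi-kilobyte err="failed to patch status ..." entries above carry the JSON patch twice-escaped: once by klog's quoting of the patch, once by the quoting of the err value itself. A small sketch, assuming Python and a fragment copied verbatim from the node-resolver entry, of peeling both layers off for inspection:

    import json

    # Fragment of one escaped patch as it appears in the journal text.
    raw = r'{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"}}'

    # Two unescape passes undo the two quoting layers.
    for _ in range(2):
        raw = raw.encode().decode("unicode_escape")
    print(json.loads(raw)["metadata"]["uid"])  # 7bd8788d-8022-4502-9181-8d4048712c30
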
Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.499949 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.500488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.500500 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.500521 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.500535 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.562428 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:12 crc kubenswrapper[4727]: E1210 14:33:12.562619 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.562448 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:12 crc kubenswrapper[4727]: E1210 14:33:12.562724 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.562429 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:12 crc kubenswrapper[4727]: E1210 14:33:12.562845 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
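
ovnkube-controller above sits in CrashLoopBackOff at the "back-off 20s" step with restartCount 2, which matches the kubelet's restart back-off. A sketch of that schedule, assuming the kubelet's default 10-second base and 5-minute cap (constants are assumptions, not read from this log):

    # Kubelet crash-loop back-off: delay doubles per restart, capped.
    BASE_S, CAP_S = 10, 300  # assumed kubelet defaults
    delay, schedule = BASE_S, []
    for _ in range(8):
        schedule.append(min(delay, CAP_S))
        delay *= 2
    print(schedule)  # [10, 20, 40, 80, 160, 300, 300, 300]
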
pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.604112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.604192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.604234 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.604273 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.604299 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.707285 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.707341 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.707353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.707372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.707384 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.810408 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.810467 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.810479 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.810499 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.810512 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.913353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.913404 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.913420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.913443 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4727]: I1210 14:33:12.913460 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.016554 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.016600 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.016614 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.016637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.016651 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
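
The repeating NetworkReady=false condition reduces to one fact: the CNI conf directory named in the message has no configuration yet, which is expected on this node while ovnkube-controller, the component that writes it, is still crash-looping. A quick on-node check (path copied from the log; the suffix set mirrors what libcni loads and is an assumption):

    import pathlib

    # Does the CNI conf dir the kubelet complains about contain any config?
    cni_dir = pathlib.Path("/etc/kubernetes/cni/net.d")
    confs = sorted(p.name for p in cni_dir.glob("*")
                   if p.suffix in {".conf", ".conflist", ".json"})
    print("NetworkReady would clear:", bool(confs), confs)
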
Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.119255 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.119302 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.119313 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.119332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.119346 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.288184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.288229 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.288239 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.288256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.288268 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.390950 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.391016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.391026 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.391064 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.391077 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.493610 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.493652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.493660 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.493676 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.493688 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.562250 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:13 crc kubenswrapper[4727]: E1210 14:33:13.562450 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.596307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.596353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.596368 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.596387 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.596400 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.698795 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.698858 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.698871 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.698892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.698952 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.801619 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.801660 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.801669 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.801685 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.801696 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.904850 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.905234 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.905359 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.905514 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4727]: I1210 14:33:13.905686 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.009352 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.009711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.010110 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.010457 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.010873 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.114571 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.115548 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.115785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.116002 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.116189 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.219589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.219627 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.219639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.219658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.219671 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.323345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.323665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.323759 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.323859 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.323999 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.426884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.426949 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.426964 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.426984 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.426997 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.529826 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.529873 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.529891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.529925 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.529938 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.562156 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.562182 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:14 crc kubenswrapper[4727]: E1210 14:33:14.562589 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.562234 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:14 crc kubenswrapper[4727]: E1210 14:33:14.562878 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:14 crc kubenswrapper[4727]: E1210 14:33:14.563065 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.632650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.632683 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.632691 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.632705 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.632715 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.735872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.735943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.735953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.735966 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.735976 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.838205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.838290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.838337 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.838406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.838418 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.942087 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.942155 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.942288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.942331 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4727]: I1210 14:33:14.942350 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.045306 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.045356 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.045367 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.045390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.045405 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.148715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.148784 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.148797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.148816 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.148832 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.252476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.252571 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.252595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.252630 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.252655 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.357796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.358344 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.358695 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.358970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.359195 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.463540 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.463619 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.463644 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.463677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.463699 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.562806 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:15 crc kubenswrapper[4727]: E1210 14:33:15.563115 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.567111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.567175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.567198 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.567230 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.567260 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.670864 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.670991 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.671005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.671030 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.671042 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.773749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.773820 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.773834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.773856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.773873 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.877038 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.877106 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.877122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.877149 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.877171 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.980285 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.980347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.980364 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.980389 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4727]: I1210 14:33:15.980407 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.083834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.083895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.083932 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.083954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.083967 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.188563 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.188619 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.188632 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.188650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.188664 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.292673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.292712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.292723 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.292740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.292752 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.395531 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.395580 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.395594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.395612 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.395624 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.499088 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.499157 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.499172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.499189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.499202 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.562358 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:16 crc kubenswrapper[4727]: E1210 14:33:16.562547 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.562597 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.562698 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:16 crc kubenswrapper[4727]: E1210 14:33:16.562901 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:16 crc kubenswrapper[4727]: E1210 14:33:16.563050 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.582207 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.596386 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.601574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.601653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.601672 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.601698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.601717 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.611377 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.628186 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.650304 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.667995 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.680873 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.696478 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.703766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.703954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.704028 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.704098 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.704156 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.711668 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.724627 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029e1a2087c1fc515492e739da376e0970f5738dadd2a6842d8dfea64c28fe2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:33:10Z\\\",\\\"message\\\":\\\"2025-12-10T14:32:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0\\\\n2025-12-10T14:32:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0 to /host/opt/cni/bin/\\\\n2025-12-10T14:32:25Z [verbose] multus-daemon started\\\\n2025-12-10T14:32:25Z [verbose] Readiness Indicator file check\\\\n2025-12-10T14:33:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.740116 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.757648 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.770618 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.783480 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.803373 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7024ca23-6f38-49bb-ba03-b68bbe28204d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc9e274851f99977da7e9cf67e949f0ca072dc6621914ecddba9778fdfca4890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d00cd326bdadaf828c87f3de300e3734078b9e6b4248c583755653611bea702\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d9e6dbcaea461c31eb44e844655898d6a3bfa039d1104dd762c046d30b1f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cc808827af2142e71916784e9d42138f878bb443c5b88b10a048268577da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a010e72ae83ee1b20433f438efd67006d4bf039011cfa9c84ae5d19c4cfacf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.807733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.807787 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.807798 4727 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.807814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.807828 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.825849 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681
cd2adf86413f7c8fe7f35b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"kg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:53.262099 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 14:32:53.262160 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 14:32:53.262167 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 14:32:53.262174 6535 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:53.262233 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 14:32:53.262260 6535 factory.go:656] Stopping watch factory\\\\nI1210 14:32:53.262286 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 14:32:53.262298 6535 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:53.262304 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 14:32:53.262311 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:53.262397 6535 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:53.262741 6535 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k8b7p_openshift-ovn-kubernetes(5b9f88bc-1b6e-4dd4-9d6e-febdde2facba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.840076 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.850612 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dde5dc5-b6ff-49be-870f-a8b04430ef3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815aab5c9a40b70a41cbf02e8e219ef934e21cf097b8ce3d7507ab1d43809a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.910554 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.910605 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.910616 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.910639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4727]: I1210 14:33:16.910652 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.013967 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.014022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.014032 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.014051 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.014062 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.117536 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.117577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.117587 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.117603 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.117616 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.220704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.221424 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.221506 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.221592 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.221659 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.325057 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.325344 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.325445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.325573 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.325680 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.428075 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.428122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.428134 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.428156 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.428171 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.530811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.530899 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.530991 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.531018 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.531038 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.562209 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:17 crc kubenswrapper[4727]: E1210 14:33:17.562633 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.634870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.634941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.634953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.634971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.634985 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.738566 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.738622 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.738638 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.738658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.738673 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.843219 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.843494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.843516 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.843548 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.843572 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.947376 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.947456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.947483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.947508 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4727]: I1210 14:33:17.947527 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.051140 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.051217 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.051241 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.051273 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.051297 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.155443 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.155810 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.155936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.156059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.156190 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.259891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.259946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.259955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.259974 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.259985 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.363299 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.363350 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.363365 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.363390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.363403 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.473266 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.473363 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.473390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.473420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.473439 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.562398 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:18 crc kubenswrapper[4727]: E1210 14:33:18.562972 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.562526 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.562451 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:18 crc kubenswrapper[4727]: E1210 14:33:18.563331 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:18 crc kubenswrapper[4727]: E1210 14:33:18.563547 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.576537 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.576585 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.576596 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.576615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.576628 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.680031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.680098 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.680121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.680147 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.680165 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.771522 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.771584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.771597 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.771621 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.771633 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4727]: E1210 14:33:18.789043 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.795069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.795111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.795121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.795144 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.795158 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4727]: E1210 14:33:18.811193 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.818403 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.818701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.818783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.818884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.819003 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.839717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.839797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.839814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.839833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.839846 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.857940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.857987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.857998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.858015 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.858029 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4727]: E1210 14:33:18.870685 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.872519 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.872580 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.872593 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.872612 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.872627 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.975179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.975222 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.975230 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.975247 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4727]: I1210 14:33:18.975258 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.079894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.080167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.080202 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.080238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.080298 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.182856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.183289 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.183357 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.183488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.183616 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.286969 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.287523 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.287648 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.287878 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.288096 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.391900 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.391971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.391983 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.392001 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.392017 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.495533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.495593 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.495611 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.495631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.495643 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.562492 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:19 crc kubenswrapper[4727]: E1210 14:33:19.562713 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.599305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.599360 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.599373 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.599393 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.599408 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.703170 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.703249 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.703266 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.703288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.703300 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.807378 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.807463 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.807481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.807513 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.807537 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.911428 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.911548 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.911575 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.911659 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4727]: I1210 14:33:19.911685 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.015219 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.015260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.015273 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.015291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.015302 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.118837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.118893 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.118925 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.118947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.118977 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.222605 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.222680 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.222693 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.222721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.222734 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.326817 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.326968 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.326980 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.326999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.327011 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.430318 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.430403 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.430416 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.430432 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.430443 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.534123 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.534606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.534701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.534813 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.534900 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.563300 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.563393 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.563304 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:20 crc kubenswrapper[4727]: E1210 14:33:20.563540 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:20 crc kubenswrapper[4727]: E1210 14:33:20.563689 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:20 crc kubenswrapper[4727]: E1210 14:33:20.563860 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.638494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.639022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.639236 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.639432 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.639658 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.742650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.742733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.742750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.742778 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.742798 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.845972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.846020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.846032 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.846052 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.846068 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.949158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.949653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.949775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.949880 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4727]: I1210 14:33:20.950155 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.053378 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.053428 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.053440 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.053461 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.053478 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.156354 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.156445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.156460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.156486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.156505 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.260238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.260282 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.260292 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.260307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.260321 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.363289 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.364067 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.364170 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.364255 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.364332 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.467780 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.468212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.468317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.468447 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.468620 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.562266 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:21 crc kubenswrapper[4727]: E1210 14:33:21.562939 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.563228 4727 scope.go:117] "RemoveContainer" containerID="528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.570418 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.570476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.570488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.570513 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.570529 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.673749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.673797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.673812 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.673833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.673846 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.776894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.776955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.776967 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.776984 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.776997 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.881654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.881725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.881737 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.881758 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.882190 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.985139 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.985194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.985205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.985224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4727]: I1210 14:33:21.985234 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.089154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.089209 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.089220 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.089245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.089258 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:22Z","lastTransitionTime":"2025-12-10T14:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.192390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.192468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.192486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.192525 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.192542 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:22Z","lastTransitionTime":"2025-12-10T14:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.295642 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.295701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.295718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.295739 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.295749 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:22Z","lastTransitionTime":"2025-12-10T14:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.398829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.398892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.398940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.398970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.398989 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:22Z","lastTransitionTime":"2025-12-10T14:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.502348 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.502405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.502415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.502434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.502449 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:22Z","lastTransitionTime":"2025-12-10T14:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.563249 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.563326 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:22 crc kubenswrapper[4727]: E1210 14:33:22.563518 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.563648 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:22 crc kubenswrapper[4727]: E1210 14:33:22.563745 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:22 crc kubenswrapper[4727]: E1210 14:33:22.563859 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.579314 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs\") pod \"network-metrics-daemon-wwmwn\" (UID: \"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\") " pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:22 crc kubenswrapper[4727]: E1210 14:33:22.579612 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:33:22 crc kubenswrapper[4727]: E1210 14:33:22.579707 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs podName:2bcea03d-69bd-4530-91b9-ca3ba1ffc871 nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.579688129 +0000 UTC m=+170.774462671 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs") pod "network-metrics-daemon-wwmwn" (UID: "2bcea03d-69bd-4530-91b9-ca3ba1ffc871") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.606465 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.606536 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.606550 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.606578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.606593 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:22Z","lastTransitionTime":"2025-12-10T14:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.665734 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/2.log" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.669095 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerStarted","Data":"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12"} Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.669826 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.687209 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd
6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.704316 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.716127 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.716187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.716201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.716227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.716243 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:22Z","lastTransitionTime":"2025-12-10T14:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.725734 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.746557 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.758572 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.777342 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.796374 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.811104 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.819292 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.819333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.819342 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.819358 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.819370 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:22Z","lastTransitionTime":"2025-12-10T14:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.824885 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.838974 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.855108 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029e1a2087c1fc515492e739da376e0970f5738dadd2a6842d8dfea64c28fe2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:33:10Z\\\",\\\"message\\\":\\\"2025-12-10T14:32:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0\\\\n2025-12-10T14:32:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0 to /host/opt/cni/bin/\\\\n2025-12-10T14:32:25Z [verbose] multus-daemon started\\\\n2025-12-10T14:32:25Z [verbose] Readiness Indicator file check\\\\n2025-12-10T14:33:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.868077 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.878569 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.900064 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.913805 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dde5dc5-b6ff-49be-870f-a8b04430ef3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815aab5c9a40b70a41cbf02e8e219ef934e21cf097b8ce3d7507ab1d43809a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.921435 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.921483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.921496 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.921517 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.921531 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:22Z","lastTransitionTime":"2025-12-10T14:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.935787 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7024ca23-6f38-49bb-ba03-b68bbe28204d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc9e274851f99977da7e9cf67e949f0ca072dc6621914ecddba9778fdfca4890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d00cd326bdadaf828c87f3de300e3734078b9e6b4248c583755653611bea702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d9e6dbcaea461c31eb44e844655898d6a3bfa039d1104dd762c046d30b1f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cc808827af2142e71916784e9d42138f878bb443c5b88b10a048268577da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a010e72ae83ee1b20433f438efd67006d4bf039011cfa9c84ae5d19c4cfacf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.959294 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b796524762da198d126c373e9c85bc1742dc2e8a
945f7ff39cc05d01827bcc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"kg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:53.262099 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 14:32:53.262160 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 14:32:53.262167 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 14:32:53.262174 6535 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:53.262233 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 14:32:53.262260 6535 factory.go:656] Stopping watch factory\\\\nI1210 14:32:53.262286 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 14:32:53.262298 6535 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:53.262304 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 14:32:53.262311 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:53.262397 6535 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:53.262741 6535 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:22 crc kubenswrapper[4727]: I1210 14:33:22.971464 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:22Z is after 2025-08-24T17:21:41Z" Dec 10 
14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.024712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.024749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.024757 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.024773 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.024783 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:23Z","lastTransitionTime":"2025-12-10T14:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.128115 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.128189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.128202 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.128227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.128241 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:23Z","lastTransitionTime":"2025-12-10T14:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.231293 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.231668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.231683 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.231701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.231713 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:23Z","lastTransitionTime":"2025-12-10T14:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.334591 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.334639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.334650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.334668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.334678 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:23Z","lastTransitionTime":"2025-12-10T14:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.510295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.510388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.510592 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.510620 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.510636 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:23Z","lastTransitionTime":"2025-12-10T14:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.562073 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:23 crc kubenswrapper[4727]: E1210 14:33:23.562285 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.613774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.613821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.613832 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.613851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.613863 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:23Z","lastTransitionTime":"2025-12-10T14:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.716667 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.716705 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.716714 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.716729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.716740 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:23Z","lastTransitionTime":"2025-12-10T14:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.819188 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.819220 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.819228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.819242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.819252 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:23Z","lastTransitionTime":"2025-12-10T14:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.922720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.922777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.922793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.922814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:23 crc kubenswrapper[4727]: I1210 14:33:23.922825 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:23Z","lastTransitionTime":"2025-12-10T14:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.025954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.026015 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.026029 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.026049 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.026066 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:24Z","lastTransitionTime":"2025-12-10T14:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.129445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.129491 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.129500 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.129517 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.129528 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:24Z","lastTransitionTime":"2025-12-10T14:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.233761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.233837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.233848 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.233870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.233882 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:24Z","lastTransitionTime":"2025-12-10T14:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.336224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.336258 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.336267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.336284 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.336297 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:24Z","lastTransitionTime":"2025-12-10T14:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.439316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.439352 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.439361 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.439379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.439391 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:24Z","lastTransitionTime":"2025-12-10T14:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.541815 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.541873 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.541893 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.541945 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.541965 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:24Z","lastTransitionTime":"2025-12-10T14:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.562032 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.562184 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.562299 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:24 crc kubenswrapper[4727]: E1210 14:33:24.562313 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:24 crc kubenswrapper[4727]: E1210 14:33:24.562457 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:24 crc kubenswrapper[4727]: E1210 14:33:24.562614 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.644489 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.644556 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.644574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.644598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.644612 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:24Z","lastTransitionTime":"2025-12-10T14:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.753588 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.753659 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.753677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.753704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.753729 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:24Z","lastTransitionTime":"2025-12-10T14:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.857465 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.857518 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.857532 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.857551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.857565 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:24Z","lastTransitionTime":"2025-12-10T14:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.960017 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.960050 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.960060 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.960074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:24 crc kubenswrapper[4727]: I1210 14:33:24.960087 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:24Z","lastTransitionTime":"2025-12-10T14:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.062705 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.062757 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.062769 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.062789 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.062803 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:25Z","lastTransitionTime":"2025-12-10T14:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.165599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.165648 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.165657 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.165673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.165683 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:25Z","lastTransitionTime":"2025-12-10T14:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.269412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.269489 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.269514 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.269547 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.269571 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:25Z","lastTransitionTime":"2025-12-10T14:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.373190 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.373245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.373256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.373278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.373292 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:25Z","lastTransitionTime":"2025-12-10T14:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.476054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.476117 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.476134 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.476156 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.476173 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:25Z","lastTransitionTime":"2025-12-10T14:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.563026 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:25 crc kubenswrapper[4727]: E1210 14:33:25.563210 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.579473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.579519 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.579531 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.579549 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.579563 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:25Z","lastTransitionTime":"2025-12-10T14:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.683223 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.683286 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.683299 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.683319 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.683337 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:25Z","lastTransitionTime":"2025-12-10T14:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.786932 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.787020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.787041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.787071 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.787090 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:25Z","lastTransitionTime":"2025-12-10T14:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.890323 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.890372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.890383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.890404 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.890415 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:25Z","lastTransitionTime":"2025-12-10T14:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.993880 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.993958 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.993972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.993993 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:25 crc kubenswrapper[4727]: I1210 14:33:25.994006 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:25Z","lastTransitionTime":"2025-12-10T14:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.096100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.096141 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.096149 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.096164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.096185 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:26Z","lastTransitionTime":"2025-12-10T14:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.199100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.199139 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.199147 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.199161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.199173 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:26Z","lastTransitionTime":"2025-12-10T14:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.301406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.301444 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.301452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.301468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.301477 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:26Z","lastTransitionTime":"2025-12-10T14:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.405985 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.406040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.406062 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.406084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:26 crc kubenswrapper[4727]: I1210 14:33:26.406099 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:26Z","lastTransitionTime":"2025-12-10T14:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.529526 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.529583 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.529593 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.529610 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.529620 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:27Z","lastTransitionTime":"2025-12-10T14:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.532819 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:27 crc kubenswrapper[4727]: E1210 14:33:27.532994 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.533421 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:27 crc kubenswrapper[4727]: E1210 14:33:27.533540 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.533700 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:27 crc kubenswrapper[4727]: E1210 14:33:27.533760 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.534201 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:27 crc kubenswrapper[4727]: E1210 14:33:27.534408 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.535791 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/3.log" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.542612 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/2.log" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.547595 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerID="b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12" exitCode=1 Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.547665 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerDied","Data":"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12"} Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.547751 4727 scope.go:117] "RemoveContainer" containerID="528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.548986 4727 scope.go:117] "RemoveContainer" containerID="b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12" Dec 10 14:33:27 crc kubenswrapper[4727]: E1210 14:33:27.549315 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k8b7p_openshift-ovn-kubernetes(5b9f88bc-1b6e-4dd4-9d6e-febdde2facba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.551386 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.566774 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7024ca23-6f38-49bb-ba03-b68bbe28204d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc9e274851f99977da7e9cf67e949f0ca072dc6621914ecddba9778fdfca4890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d00cd326bdadaf828c87f3de300e3734078b9e6b4248c583755653611bea702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d9e6dbcaea461c31eb44e844655898d6a3bfa039d1104dd762c046d30b1f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cc808827af2142e71916784e9d42138f878bb
443c5b88b10a048268577da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a010e72ae83ee1b20433f438efd67006d4bf039011cfa9c84ae5d19c4cfacf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.589416 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b796524762da198d126c373e9c85bc1742dc2e8a
945f7ff39cc05d01827bcc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"kg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:53.262099 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 14:32:53.262160 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 14:32:53.262167 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 14:32:53.262174 6535 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:53.262233 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 14:32:53.262260 6535 factory.go:656] Stopping watch factory\\\\nI1210 14:32:53.262286 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 14:32:53.262298 6535 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:53.262304 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 14:32:53.262311 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:53.262397 6535 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:53.262741 6535 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.605335 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 
14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.618741 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dde5dc5-b6ff-49be-870f-a8b04430ef3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815aab5c9a40b70a41cbf02e8e219ef934e21cf097b8ce3d7507ab1d43809a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.631763 4727 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.631810 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.631822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.631839 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.631851 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:27Z","lastTransitionTime":"2025-12-10T14:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.636061 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.652171 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.669415 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.685179 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.707342 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.723771 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.734497 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.734548 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.734559 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.734577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.734589 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:27Z","lastTransitionTime":"2025-12-10T14:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.736660 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.750419 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.768190 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.785819 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029e1a2087c1fc515492e739da376e0970f5738dadd2a6842d8dfea64c28fe2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:33:10Z\\\",\\\"message\\\":\\\"2025-12-10T14:32:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0\\\\n2025-12-10T14:32:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0 to /host/opt/cni/bin/\\\\n2025-12-10T14:32:25Z [verbose] multus-daemon started\\\\n2025-12-10T14:32:25Z [verbose] Readiness Indicator file check\\\\n2025-12-10T14:33:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.800207 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.816985 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.830901 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.836926 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.836979 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.836991 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.837012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.837030 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:27Z","lastTransitionTime":"2025-12-10T14:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.849021 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.863780 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.879047 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.895747 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.914932 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029e1a2087c1fc515492e739da376e0970f5738dadd2a6842d8dfea64c28fe2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:33:10Z\\\",\\\"message\\\":\\\"2025-12-10T14:32:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0\\\\n2025-12-10T14:32:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0 to /host/opt/cni/bin/\\\\n2025-12-10T14:32:25Z [verbose] multus-daemon started\\\\n2025-12-10T14:32:25Z [verbose] Readiness Indicator file check\\\\n2025-12-10T14:33:10Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.930173 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.940252 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.940300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.940315 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.940338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.940353 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:27Z","lastTransitionTime":"2025-12-10T14:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.948438 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.960304 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.975386 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:27 crc kubenswrapper[4727]: I1210 14:33:27.991031 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c36463c0-3be0-4847-806b-00299265997d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9760fae34882c2d667494ea8e6be195227d998f0a582f3bd6de65ddc122d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f54dcc8b8184353685144518200440ca7fe027ce457ae0cbb85f6bac6935fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5861a48edbc265757ef603e83ebb8a092643fd4a59b0e574a7adffea2d8883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e15777c9568a8c26a8f6559505b878af76013b6c62ad70c8bde7eb2dab957\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.014812 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7024ca23-6f38-49bb-ba03-b68bbe28204d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc9e274851f99977da7e9cf67e949f0ca072dc6621914ecddba9778fdfca4890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d00cd326bdadaf828c87f3de300e3734078b9e6b4248c583755653611bea702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d9e6dbcaea461c31eb44e844655898d6a3bfa039d1104dd762c046d30b1f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cc808827af2142e71916784e9d42138f878bb
443c5b88b10a048268577da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a010e72ae83ee1b20433f438efd67006d4bf039011cfa9c84ae5d19c4cfacf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.042224 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b796524762da198d126c373e9c85bc1742dc2e8a
945f7ff39cc05d01827bcc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"kg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:53.262099 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 14:32:53.262160 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 14:32:53.262167 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 14:32:53.262174 6535 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:53.262233 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 14:32:53.262260 6535 factory.go:656] Stopping watch factory\\\\nI1210 14:32:53.262286 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 14:32:53.262298 6535 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:53.262304 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 14:32:53.262311 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:53.262397 6535 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:53.262741 6535 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:33:25Z\\\",\\\"message\\\":\\\"te/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:33:24.845870 6845 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:33:24.845944 6845 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:33:24.846013 6845 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:33:24.846611 6845 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 14:33:24.846649 6845 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 14:33:24.846656 6845 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 14:33:24.846682 6845 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 14:33:24.846696 6845 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:33:24.846704 6845 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 14:33:24.846747 6845 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI1210 14:33:24.846836 6845 factory.go:656] Stopping watch factory\\\\nI1210 14:33:24.846862 6845 ovnkube.go:599] Stopped ovnkube\\\\nI1210 14:33:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6
a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.043868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.043938 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.043954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.043980 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.043992 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:28Z","lastTransitionTime":"2025-12-10T14:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.059315 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.072515 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dde5dc5-b6ff-49be-870f-a8b04430ef3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815aab5c9a40b70a41cbf02e8e219ef934e21cf097b8ce3d7507ab1d43809a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.090370 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.105090 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.120494 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.135063 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.146720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.146769 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.146785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.146809 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.146831 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:28Z","lastTransitionTime":"2025-12-10T14:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.157469 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.178146 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.250550 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.250610 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.250621 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.250650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.250662 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:28Z","lastTransitionTime":"2025-12-10T14:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.355206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.355265 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.355276 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.355334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.355353 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:28Z","lastTransitionTime":"2025-12-10T14:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.459542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.459609 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.459620 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.459640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.459658 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:28Z","lastTransitionTime":"2025-12-10T14:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.553284 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/3.log" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.561410 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.561462 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.561474 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.561492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.561505 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:28Z","lastTransitionTime":"2025-12-10T14:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.664873 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.664958 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.664972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.664998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.665016 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:28Z","lastTransitionTime":"2025-12-10T14:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.767553 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.767603 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.767639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.767661 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.767680 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:28Z","lastTransitionTime":"2025-12-10T14:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.871517 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.871590 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.871599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.871619 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.871852 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:28Z","lastTransitionTime":"2025-12-10T14:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.975468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.975797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.975813 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.975833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:28 crc kubenswrapper[4727]: I1210 14:33:28.975846 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:28Z","lastTransitionTime":"2025-12-10T14:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.079075 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.079145 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.079163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.079189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.079208 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:29Z","lastTransitionTime":"2025-12-10T14:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.146207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.146262 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.146279 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.146297 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.146311 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:29Z","lastTransitionTime":"2025-12-10T14:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:29 crc kubenswrapper[4727]: E1210 14:33:29.161935 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:29Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.166283 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.166322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.166333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.166352 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.166365 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:29Z","lastTransitionTime":"2025-12-10T14:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:29 crc kubenswrapper[4727]: E1210 14:33:29.182223 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:29Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.186278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.186317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.186332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.186352 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.186364 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:29Z","lastTransitionTime":"2025-12-10T14:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:29 crc kubenswrapper[4727]: E1210 14:33:29.199767 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:29Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.203665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.203704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
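Each retry carries the same status payload: the four conditions (MemoryPressure, DiskPressure, PIDPressure, Ready), allocatable and capacity figures, the image cache, and nodeInfo. The sketch below marshals a Ready condition with the exact field set and values visible in the log; the locally defined NodeCondition struct is an assumption for illustration (the kubelet itself uses k8s.io/api/core/v1.NodeCondition).

```go
// condition.go - illustrative sketch: reproduce the JSON shape of the logged
// Ready condition with a local struct (assumed stand-in for the real type).
package main

import (
	"encoding/json"
	"fmt"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	ready := NodeCondition{
		Type:               "Ready",
		Status:             "False", // Status "False" on type Ready is what marks the node NotReady
		LastHeartbeatTime:  "2025-12-10T14:33:29Z",
		LastTransitionTime: "2025-12-10T14:33:29Z",
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
	}
	out, err := json.Marshal(ready)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // same shape as the condition objects in the patch above
}
```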
event="NodeHasNoDiskPressure" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.203719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.203739 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.203752 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:29Z","lastTransitionTime":"2025-12-10T14:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:29 crc kubenswrapper[4727]: E1210 14:33:29.217970 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:29Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.222124 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.222167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
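The bulk of each payload is the node's image cache: every entry lists one or more digest or tag names plus a byte size. A small sketch decoding one entry taken from the list above; the locally defined ContainerImage struct is illustrative (the real type is k8s.io/api/core/v1.ContainerImage).

```go
// images.go - sketch of the image-cache entries carried in the status patch;
// the digest and size below are copied from the logged list.
package main

import (
	"encoding/json"
	"fmt"
)

type ContainerImage struct {
	Names     []string `json:"names"`
	SizeBytes int64    `json:"sizeBytes"`
}

func main() {
	raw := `[{"names":["quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd"],"sizeBytes":668060419}]`
	var imgs []ContainerImage
	if err := json.Unmarshal([]byte(raw), &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		// prints roughly 637 MiB for the first image in the logged list
		fmt.Printf("%s -> %.1f MiB\n", img.Names[0], float64(img.SizeBytes)/(1<<20))
	}
}
```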
event="NodeHasNoDiskPressure" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.222178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.222219 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.222232 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:29Z","lastTransitionTime":"2025-12-10T14:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:29 crc kubenswrapper[4727]: E1210 14:33:29.239451 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:29Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:29 crc kubenswrapper[4727]: E1210 14:33:29.239725 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.242398 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
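After several consecutive "will retry" failures (14:33:29.199767, .217970, .239451, plus the attempt above them), the kubelet abandons this round with "update node status exceeds retry count". A schematic Go sketch of that bounded-retry shape follows; the constant and function names are assumptions for illustration, not kubelet's actual code, which lives in kubelet_node_status.go (the budget of 5 mirrors upstream kubelet's nodeStatusUpdateRetry, but treat the exact value as an assumption here).

```go
// retry.go - schematic sketch of the bounded retry visible in the log:
// a fixed number of failed attempts, then a single "exceeds retry count" error.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed retry budget for illustration

// tryUpdateNodeStatus stands in for one PATCH attempt against the API server;
// here it always fails the way the webhook rejection does in the log.
func tryUpdateNodeStatus(attempt int) error {
	return errors.New("failed calling webhook: x509: certificate has expired or is not yet valid")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```

The whole cycle then restarts on the next status-update interval, which is why the identical error block recurs throughout this log.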
event="NodeHasSufficientMemory" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.242480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.242496 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.242523 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.242542 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:29Z","lastTransitionTime":"2025-12-10T14:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.346690 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.346740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.346754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.346774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.346787 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:29Z","lastTransitionTime":"2025-12-10T14:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.450199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.450273 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.450285 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.450306 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.450318 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:29Z","lastTransitionTime":"2025-12-10T14:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.553301 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.553379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.553397 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.553424 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.553443 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:29Z","lastTransitionTime":"2025-12-10T14:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.562338 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.562343 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.562340 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:29 crc kubenswrapper[4727]: E1210 14:33:29.562529 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.562739 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:29 crc kubenswrapper[4727]: E1210 14:33:29.562847 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:29 crc kubenswrapper[4727]: E1210 14:33:29.563100 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
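Independently of the webhook failure, the node stays NotReady because no CNI network configuration exists in /etc/kubernetes/cni/net.d/; on OpenShift that file is normally written by the cluster network operator once the network plugin starts, and until then pod sandboxes for the workloads above cannot be created. The sketch below performs the same existence check as a standalone diagnostic; the accepted extensions (.conf, .conflist, .json) follow common CNI conventions and are an assumption here, since the authoritative check happens inside the container runtime, not this tool.

```go
// cnicheck.go - diagnostic sketch: look for CNI network configs in the
// directory named by the log messages.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const confDir = "/etc/kubernetes/cni/net.d" // directory from the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(confDir, e.Name()))
			found++
		}
	}
	if found == 0 {
		// Mirrors the kubelet complaint: NetworkReady=false / NetworkPluginNotReady
		fmt.Println("no CNI configuration file in", confDir, "- has your network provider started?")
	}
}
```

If the directory is empty, the "Error syncing pod, skipping" entries for network-check-source, network-check-target, network-metrics-daemon, and networking-console-plugin will keep repeating until the network provider writes a configuration.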
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:29 crc kubenswrapper[4727]: E1210 14:33:29.563303 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.657207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.657285 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.657302 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.657328 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.657346 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:29Z","lastTransitionTime":"2025-12-10T14:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.763508 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.763595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.763606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.763628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.763639 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:29Z","lastTransitionTime":"2025-12-10T14:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.867495 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.868012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.868022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.868040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.868051 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:29Z","lastTransitionTime":"2025-12-10T14:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.972019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.972261 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.972278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.972300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:29 crc kubenswrapper[4727]: I1210 14:33:29.972322 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:29Z","lastTransitionTime":"2025-12-10T14:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.075386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.075443 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.075460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.075484 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.075502 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:30Z","lastTransitionTime":"2025-12-10T14:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.179888 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.179983 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.179997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.180020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.180036 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:30Z","lastTransitionTime":"2025-12-10T14:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.284098 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.284147 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.284161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.284181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.284195 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:30Z","lastTransitionTime":"2025-12-10T14:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.387174 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.387238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.387251 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.387270 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.387281 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:30Z","lastTransitionTime":"2025-12-10T14:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.490190 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.490259 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.490274 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.490293 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.490471 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:30Z","lastTransitionTime":"2025-12-10T14:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.594159 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.594201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.594210 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.594226 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.594238 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:30Z","lastTransitionTime":"2025-12-10T14:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.696798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.696864 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.696879 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.696933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.696949 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:30Z","lastTransitionTime":"2025-12-10T14:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.799940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.799986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.799995 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.800011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.800022 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:30Z","lastTransitionTime":"2025-12-10T14:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.903328 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.903404 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.903424 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.903450 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:30 crc kubenswrapper[4727]: I1210 14:33:30.903470 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:30Z","lastTransitionTime":"2025-12-10T14:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.006119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.006172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.006218 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.006238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.006251 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:31Z","lastTransitionTime":"2025-12-10T14:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.109824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.109871 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.109880 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.109897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.109924 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:31Z","lastTransitionTime":"2025-12-10T14:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.213060 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.213104 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.213113 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.213128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.213138 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:31Z","lastTransitionTime":"2025-12-10T14:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.315731 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.315791 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.315804 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.315825 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.315843 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:31Z","lastTransitionTime":"2025-12-10T14:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.418665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.418717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.418728 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.418747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.418761 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:31Z","lastTransitionTime":"2025-12-10T14:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.522127 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.522184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.522197 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.522221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.522236 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:31Z","lastTransitionTime":"2025-12-10T14:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.562982 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.563040 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.562981 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.563190 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:31 crc kubenswrapper[4727]: E1210 14:33:31.563382 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:31 crc kubenswrapper[4727]: E1210 14:33:31.563483 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:31 crc kubenswrapper[4727]: E1210 14:33:31.563571 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:31 crc kubenswrapper[4727]: E1210 14:33:31.563827 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.624965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.625035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.625058 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.625084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.625099 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:31Z","lastTransitionTime":"2025-12-10T14:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.727398 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.727448 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.727456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.727471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.727482 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:31Z","lastTransitionTime":"2025-12-10T14:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.831184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.831245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.831257 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.831278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.831290 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:31Z","lastTransitionTime":"2025-12-10T14:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.934179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.934236 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.934248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.934264 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:31 crc kubenswrapper[4727]: I1210 14:33:31.934276 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:31Z","lastTransitionTime":"2025-12-10T14:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.038512 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.038579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.038592 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.038614 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.038627 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:32Z","lastTransitionTime":"2025-12-10T14:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.141246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.141295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.141303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.141319 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.141333 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:32Z","lastTransitionTime":"2025-12-10T14:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.245020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.245088 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.245100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.245120 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.245130 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:32Z","lastTransitionTime":"2025-12-10T14:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.348153 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.348206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.348217 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.348239 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.348260 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:32Z","lastTransitionTime":"2025-12-10T14:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.451932 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.451978 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.451987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.452003 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.452013 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:32Z","lastTransitionTime":"2025-12-10T14:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.554583 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.554627 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.554637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.554652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.554662 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:32Z","lastTransitionTime":"2025-12-10T14:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.658064 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.658201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.658242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.658263 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.658278 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:32Z","lastTransitionTime":"2025-12-10T14:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.761267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.761328 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.761342 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.761361 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.761376 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:32Z","lastTransitionTime":"2025-12-10T14:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.866122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.866188 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.866200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.866223 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.866237 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:32Z","lastTransitionTime":"2025-12-10T14:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.969289 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.969327 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.969335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.969350 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:32 crc kubenswrapper[4727]: I1210 14:33:32.969360 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:32Z","lastTransitionTime":"2025-12-10T14:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.072594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.072664 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.072678 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.072707 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.072725 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:33Z","lastTransitionTime":"2025-12-10T14:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.177424 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.177485 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.177498 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.177520 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.177534 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:33Z","lastTransitionTime":"2025-12-10T14:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.280816 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.280856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.280868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.280886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.280899 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:33Z","lastTransitionTime":"2025-12-10T14:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.383648 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.383686 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.383698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.383716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.383727 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:33Z","lastTransitionTime":"2025-12-10T14:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.487535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.487579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.487590 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.487606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.487617 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:33Z","lastTransitionTime":"2025-12-10T14:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.562356 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.562493 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:33 crc kubenswrapper[4727]: E1210 14:33:33.562744 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.563160 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:33 crc kubenswrapper[4727]: E1210 14:33:33.563278 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.563422 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:33 crc kubenswrapper[4727]: E1210 14:33:33.562521 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:33 crc kubenswrapper[4727]: E1210 14:33:33.563517 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.590774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.590870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.590901 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.590977 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.591015 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:33Z","lastTransitionTime":"2025-12-10T14:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.694318 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.694365 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.694377 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.694396 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.694409 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:33Z","lastTransitionTime":"2025-12-10T14:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.802206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.802272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.802282 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.802303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.802315 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:33Z","lastTransitionTime":"2025-12-10T14:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.905149 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.905229 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.905252 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.905281 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:33 crc kubenswrapper[4727]: I1210 14:33:33.905304 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:33Z","lastTransitionTime":"2025-12-10T14:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.008288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.008351 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.008365 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.008391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.008406 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:34Z","lastTransitionTime":"2025-12-10T14:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.111105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.111167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.111234 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.111255 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.111270 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:34Z","lastTransitionTime":"2025-12-10T14:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.214359 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.214402 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.214412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.214428 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.214441 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:34Z","lastTransitionTime":"2025-12-10T14:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.318041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.318119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.318137 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.318157 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.318170 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:34Z","lastTransitionTime":"2025-12-10T14:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.421317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.421374 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.421387 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.421409 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.421424 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:34Z","lastTransitionTime":"2025-12-10T14:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.524168 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.524234 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.524247 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.524267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.524284 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:34Z","lastTransitionTime":"2025-12-10T14:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.627413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.627492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.627509 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.627535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.627550 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:34Z","lastTransitionTime":"2025-12-10T14:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.731064 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.731116 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.731128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.731147 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.731159 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:34Z","lastTransitionTime":"2025-12-10T14:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.833948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.834011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.834047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.834071 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.834102 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:34Z","lastTransitionTime":"2025-12-10T14:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.936863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.936945 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.936957 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.936977 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:34 crc kubenswrapper[4727]: I1210 14:33:34.936990 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:34Z","lastTransitionTime":"2025-12-10T14:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.040843 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.040934 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.040947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.040965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.040980 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:35Z","lastTransitionTime":"2025-12-10T14:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.144722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.144840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.144862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.144891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.144952 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:35Z","lastTransitionTime":"2025-12-10T14:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.248138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.248197 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.248208 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.248231 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.248243 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:35Z","lastTransitionTime":"2025-12-10T14:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.356692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.356746 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.356761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.356785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.356800 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:35Z","lastTransitionTime":"2025-12-10T14:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.460125 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.460181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.460193 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.460211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.460223 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:35Z","lastTransitionTime":"2025-12-10T14:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.562120 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.562188 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.562274 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.562332 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:35 crc kubenswrapper[4727]: E1210 14:33:35.562527 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:35 crc kubenswrapper[4727]: E1210 14:33:35.562653 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:35 crc kubenswrapper[4727]: E1210 14:33:35.562774 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:35 crc kubenswrapper[4727]: E1210 14:33:35.562985 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.563758 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.563803 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.563818 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.563839 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.563854 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:35Z","lastTransitionTime":"2025-12-10T14:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.666989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.667041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.667051 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.667068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.667081 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:35Z","lastTransitionTime":"2025-12-10T14:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.769961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.770025 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.770036 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.770061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.770073 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:35Z","lastTransitionTime":"2025-12-10T14:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.873428 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.873506 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.873527 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.873555 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.873576 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:35Z","lastTransitionTime":"2025-12-10T14:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.976805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.976869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.976882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.976926 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:35 crc kubenswrapper[4727]: I1210 14:33:35.976939 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:35Z","lastTransitionTime":"2025-12-10T14:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.079642 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.079717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.079729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.079751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.079764 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:36Z","lastTransitionTime":"2025-12-10T14:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.182742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.182778 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.182786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.182802 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.182812 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:36Z","lastTransitionTime":"2025-12-10T14:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.285596 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.285646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.285655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.285672 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.285683 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:36Z","lastTransitionTime":"2025-12-10T14:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.388564 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.388617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.388626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.388644 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.388656 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:36Z","lastTransitionTime":"2025-12-10T14:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:36 crc kubenswrapper[4727]: E1210 14:33:36.489648 4727 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.579179 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.594313 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.609732 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.626860 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.641407 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.661164 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.679022 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.690015 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.707474 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.722566 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.736621 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029e1a2087c1fc515492e739da376e0970f5738dadd2a6842d8dfea64c28fe2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:33:10Z\\\",\\\"message\\\":\\\"2025-12-10T14:32:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0\\\\n2025-12-10T14:32:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0 to /host/opt/cni/bin/\\\\n2025-12-10T14:32:25Z [verbose] multus-daemon started\\\\n2025-12-10T14:32:25Z [verbose] Readiness Indicator file check\\\\n2025-12-10T14:33:10Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.751784 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.767394 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c36463c0-3be0-4847-806b-00299265997d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9760fae34882c2d667494ea8e6be195227d998f0a582f3bd6de65ddc122d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f54dcc8b8184353685144518200440ca7fe027ce457ae0cbb85f6bac6935fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5861a48edbc265757ef603e83ebb8a092643fd4a59b0e574a7adffea2d8883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e15777c9568a8c26a8f6559505b878af76013b6c62ad70c8bde7eb2dab957\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.782683 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.800546 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.814992 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dde5dc5-b6ff-49be-870f-a8b04430ef3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815aab5c9a40b70a41cbf02e8e219ef934e21cf097b8ce3d7507ab1d43809a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325
de89a4ae877\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.842355 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7024ca23-6f38-49bb-ba03-b68bbe28204d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc9e274851f99977da7e9cf67e949f0ca072dc6621914ecddba9778fdfca4890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d00cd326bdadaf828c87f3de300e3734078b9e6b4248c583755653611bea702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d9e6dbcaea461c31eb44e844655898d6a3bfa039d1104dd762c046d30b1f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cc808827af2142e71916784e9d42138f878bb443c5b88b10a048268577da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a010e72ae83ee1b20433f438efd67006d4bf039011cfa9c84ae5d19c4cfacf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31
:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.864556 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://528a8dc624a9dbb647065b5be86d64dbe8cef681cd2adf86413f7c8fe7f35b33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"kg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:53.262099 6535 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 14:32:53.262160 6535 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 14:32:53.262167 6535 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 14:32:53.262174 6535 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:32:53.262233 6535 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 14:32:53.262260 6535 factory.go:656] Stopping watch factory\\\\nI1210 14:32:53.262286 6535 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 14:32:53.262298 6535 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:32:53.262304 6535 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 14:32:53.262311 6535 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 14:32:53.262397 6535 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:53.262741 6535 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:33:25Z\\\",\\\"message\\\":\\\"te/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:33:24.845870 6845 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:33:24.845944 6845 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:33:24.846013 6845 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:33:24.846611 6845 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 14:33:24.846649 6845 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 14:33:24.846656 6845 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 14:33:24.846682 6845 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 
14:33:24.846696 6845 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:33:24.846704 6845 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 14:33:24.846747 6845 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 14:33:24.846836 6845 factory.go:656] Stopping watch factory\\\\nI1210 14:33:24.846862 6845 ovnkube.go:599] Stopped ovnkube\\\\nI1210 14:33:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:36 crc kubenswrapper[4727]: I1210 14:33:36.881365 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:36Z is after 2025-08-24T17:21:41Z" Dec 10 
14:33:37 crc kubenswrapper[4727]: E1210 14:33:37.533638 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:33:37 crc kubenswrapper[4727]: I1210 14:33:37.563339 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:37 crc kubenswrapper[4727]: I1210 14:33:37.563463 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:37 crc kubenswrapper[4727]: I1210 14:33:37.563463 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:37 crc kubenswrapper[4727]: I1210 14:33:37.563642 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:37 crc kubenswrapper[4727]: E1210 14:33:37.563662 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:37 crc kubenswrapper[4727]: E1210 14:33:37.563837 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:37 crc kubenswrapper[4727]: E1210 14:33:37.563935 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:37 crc kubenswrapper[4727]: E1210 14:33:37.564360 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.564149 4727 scope.go:117] "RemoveContainer" containerID="b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12" Dec 10 14:33:38 crc kubenswrapper[4727]: E1210 14:33:38.564601 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k8b7p_openshift-ovn-kubernetes(5b9f88bc-1b6e-4dd4-9d6e-febdde2facba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.581658 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.599189 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf648577f8e77114d5e7f06dc28cbe48d8dc086a07cd994f22b9ff7d0c295ee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659ad81746f2a5f7636b157b988948686f54a1749cf2f240b224db0024aba74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.615320 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6ph7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c724a700-1960-4452-9106-d71685d1b38c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029e1a2087c1fc515492e739da376e0970f5738dadd2a6842d8dfea64c28fe2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:33:10Z\\\",\\\"message\\\":\\\"2025-12-10T14:32:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0\\\\n2025-12-10T14:32:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f37c2a54-da17-44d1-bb45-e95c893f03b0 to /host/opt/cni/bin/\\\\n2025-12-10T14:32:25Z [verbose] multus-daemon started\\\\n2025-12-10T14:32:25Z [verbose] Readiness Indicator file check\\\\n2025-12-10T14:33:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjvwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6ph7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.631011 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5bh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wwmwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.648777 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1210 14:31:54.462447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:54.464025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1808038604/tls.crt::/tmp/serving-cert-1808038604/tls.key\\\\\\\"\\\\nI1210 14:32:00.813279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:32:00.819191 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:32:00.819230 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:32:00.819293 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:32:00.819299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:32:00.830958 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:32:00.830986 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:32:00.830998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:32:00.831002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:32:00.831006 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:32:00.831010 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:32:00.831204 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:32:00.833840 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.666353 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cstpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84a1f7a-5938-4bec-9ff5-5033db566f4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3348a002197f0f12498142cd82f2f8d3163c94de8f5b6e6f69cd75f1aff4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj59p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cstpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.686101 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.703524 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c36463c0-3be0-4847-806b-00299265997d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9760fae34882c2d667494ea8e6be195227d998f0a582f3bd6de65ddc122d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f54dcc8b8184353685144518200440ca7fe027ce457ae0cbb85f6bac6935fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5861a48edbc265757ef603e83ebb8a092643fd4a59b0e574a7adffea2d8883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e15777c9568a8c26a8f6559505b878af76013b6c62ad70c8bde7eb2dab957\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.722658 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pq9t7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bd8788d-8022-4502-9181-8d4048712c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d26939fa6161d346807ba4eabda24c4c3e444b7513874557f1a58b668a18f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pq9t7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.744407 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:33:25Z\\\",\\\"message\\\":\\\"te/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:33:24.845870 6845 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:33:24.845944 6845 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1210 14:33:24.846013 6845 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:33:24.846611 6845 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 14:33:24.846649 6845 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 14:33:24.846656 6845 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 14:33:24.846682 6845 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 14:33:24.846696 6845 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 14:33:24.846704 6845 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 14:33:24.846747 6845 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 14:33:24.846836 6845 factory.go:656] Stopping watch factory\\\\nI1210 14:33:24.846862 6845 ovnkube.go:599] Stopped ovnkube\\\\nI1210 14:33:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:33:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k8b7p_openshift-ovn-kubernetes(5b9f88bc-1b6e-4dd4-9d6e-febdde2facba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k8b7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.761880 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2251dee-3373-4fb3-b1cd-56003fa83f22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168c837b92970e06fc09104a733dcb9b425d44d67725c8c81b2cfb31dfaa1b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7226d6621fc7e975aaf6e1c05400c8cd866797e849ee8f14092935dc39f056c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fz6ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xnpwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.779274 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dde5dc5-b6ff-49be-870f-a8b04430ef3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815aab5c9a40b70a41cbf02e8e219ef934e21cf097b8ce3d7507ab1d43809a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b3f71632cefa03859c8c599dcd465d98008223c7b5e58b72325de89a4ae877\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.807759 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7024ca23-6f38-49bb-ba03-b68bbe28204d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc9e274851f99977da7e9cf67e949f0ca072dc6621914ecddba9778fdfca4890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d00cd326bdadaf828c87f3de300e3734078b9e6b4248c583755653611bea702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d9e6dbcaea461c31eb44e844655898d6a3bfa039d1104dd762c046d30b1f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cc808827af2142e71916784e9d42138f878bb
443c5b88b10a048268577da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a010e72ae83ee1b20433f438efd67006d4bf039011cfa9c84ae5d19c4cfacf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b04f5c8de6d9d70c62fa9f156f9be1cbdc6be24ed4b296ca4e0ad850854f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b43a7922cf18dda13cd5a9a72299f6dd5655dc8ca6d61806ea1f48af482b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf396ec17501441a334dbcc4df8c1cb43b7f90bcaac8c1661451dfca68d7ea7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.821719 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6a2a163b49bda169909187f0134d6abda07714d47fdbe628cad658bf0a96b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.835565 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.856668 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe1deb75-3aeb-4657-a335-fd4c02a2a513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80cd00d92195edc1bf604a1aada2bf35345c3a569af9e4f95fe4b55a6088642f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xw98j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5kj8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.875432 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e83cbea-272d-4fcf-a39b-f2a60adbfb9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b08d76f180839dff5ad076cb6524f9a2c1c189a86e2095241f8c6e92907d0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7c4d14d99960f2347c660884dd8c924d31d0cf802b8a876ccfe59750278d70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336f720cf54291cd3fdc21b838281656fe1bfa6a59574751aacbd20854c447c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9cdfd8dd6c6841c4b08af6641aae6c37a70a590cc88ab1bda3162bdf41b04e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1190fb8752e4aa08337737e3e76508063312abbe5eb6dfb6bda1b1b395e3318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea7ecf06d114056023a25b56556ed82649364b9dfc4fab8a1ba50b2aaeb4955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9d027a066a5c1b4ce22b4a083a4e891d68888eaa5085c7ac73ff25beaf9bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc2gk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nn8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.894711 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f19f247-7a99-4140-b5c3-90bf5a82d53f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8691a2f8e8c829dd6aeb1fe2897fa4cd48290d63c7ad2938ada354a8aa11326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5fb702de257a51cfac0952666d94d4973348eb90f52dd91d71fe7413cdb3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94a147d10d0d22dcb950824dc0f7b0c9bbd6d4e05e5bbf15f31a1972cc9cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658ea896b54c3a899400c9dbd583ee5fcc9863c79dfc16468fea2e1b987671ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:38 crc kubenswrapper[4727]: I1210 14:33:38.913368 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa1daf0bd93e316ebe8aefeb84c3b97c8415af60df545c8465694e717c5cf28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:38Z is after 
2025-08-24T17:21:41Z" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.562666 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.562707 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.562747 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.562771 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.562815 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.562828 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:39 crc kubenswrapper[4727]: E1210 14:33:39.562821 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.562849 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.562869 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:39Z","lastTransitionTime":"2025-12-10T14:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:39 crc kubenswrapper[4727]: E1210 14:33:39.563078 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.563110 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:39 crc kubenswrapper[4727]: E1210 14:33:39.563350 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:39 crc kubenswrapper[4727]: E1210 14:33:39.563512 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:39 crc kubenswrapper[4727]: E1210 14:33:39.579210 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:39Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.584565 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.584595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.584607 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.584622 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.584631 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:39Z","lastTransitionTime":"2025-12-10T14:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:39 crc kubenswrapper[4727]: E1210 14:33:39.597325 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:39Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.601662 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.601691 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.601702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.601718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.601728 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:39Z","lastTransitionTime":"2025-12-10T14:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:39 crc kubenswrapper[4727]: E1210 14:33:39.616643 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:39Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.622076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.622390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.622477 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.622558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.622633 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:39Z","lastTransitionTime":"2025-12-10T14:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:39 crc kubenswrapper[4727]: E1210 14:33:39.638582 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:39Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.643890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.643958 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.643969 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.643988 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:39 crc kubenswrapper[4727]: I1210 14:33:39.644004 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:39Z","lastTransitionTime":"2025-12-10T14:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:39 crc kubenswrapper[4727]: E1210 14:33:39.657556 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2fe4a460-78b2-4a5f-9ad2-0b44f4da4f44\\\",\\\"systemUUID\\\":\\\"bd5527bc-7a25-4b1d-868d-d32d9da06147\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:39Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:39 crc kubenswrapper[4727]: E1210 14:33:39.657729 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:33:41 crc kubenswrapper[4727]: I1210 14:33:41.562381 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:41 crc kubenswrapper[4727]: I1210 14:33:41.562514 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:41 crc kubenswrapper[4727]: E1210 14:33:41.563156 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:41 crc kubenswrapper[4727]: I1210 14:33:41.562565 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:41 crc kubenswrapper[4727]: I1210 14:33:41.562513 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:41 crc kubenswrapper[4727]: E1210 14:33:41.563382 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:41 crc kubenswrapper[4727]: E1210 14:33:41.563439 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:41 crc kubenswrapper[4727]: E1210 14:33:41.563525 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:42 crc kubenswrapper[4727]: E1210 14:33:42.535124 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:33:43 crc kubenswrapper[4727]: I1210 14:33:43.562744 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:43 crc kubenswrapper[4727]: I1210 14:33:43.562811 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:43 crc kubenswrapper[4727]: I1210 14:33:43.562744 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:43 crc kubenswrapper[4727]: I1210 14:33:43.564528 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:43 crc kubenswrapper[4727]: E1210 14:33:43.564810 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:43 crc kubenswrapper[4727]: E1210 14:33:43.564989 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:43 crc kubenswrapper[4727]: E1210 14:33:43.565210 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:43 crc kubenswrapper[4727]: E1210 14:33:43.565488 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:45 crc kubenswrapper[4727]: I1210 14:33:45.562193 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:45 crc kubenswrapper[4727]: E1210 14:33:45.562404 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:45 crc kubenswrapper[4727]: I1210 14:33:45.562678 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:45 crc kubenswrapper[4727]: E1210 14:33:45.562748 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:45 crc kubenswrapper[4727]: I1210 14:33:45.562890 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:45 crc kubenswrapper[4727]: E1210 14:33:45.562999 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:45 crc kubenswrapper[4727]: I1210 14:33:45.563181 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:45 crc kubenswrapper[4727]: E1210 14:33:45.563258 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:46 crc kubenswrapper[4727]: I1210 14:33:46.623545 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xnpwt" podStartSLOduration=104.623469293 podStartE2EDuration="1m44.623469293s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:46.623433642 +0000 UTC m=+130.818208174" watchObservedRunningTime="2025-12-10 14:33:46.623469293 +0000 UTC m=+130.818243835" Dec 10 14:33:46 crc kubenswrapper[4727]: I1210 14:33:46.639170 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=52.639145795 podStartE2EDuration="52.639145795s" podCreationTimestamp="2025-12-10 14:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:46.638558102 +0000 UTC m=+130.833332644" watchObservedRunningTime="2025-12-10 14:33:46.639145795 +0000 UTC m=+130.833920337" Dec 10 14:33:46 crc kubenswrapper[4727]: I1210 14:33:46.687683 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=50.687649537 podStartE2EDuration="50.687649537s" podCreationTimestamp="2025-12-10 14:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:46.671750289 +0000 UTC m=+130.866524831" watchObservedRunningTime="2025-12-10 14:33:46.687649537 +0000 UTC m=+130.882424079" Dec 10 14:33:46 crc kubenswrapper[4727]: I1210 14:33:46.723014 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podStartSLOduration=104.722982513 podStartE2EDuration="1m44.722982513s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:46.72198785 +0000 UTC m=+130.916762452" watchObservedRunningTime="2025-12-10 14:33:46.722982513 +0000 UTC m=+130.917757055" Dec 10 14:33:46 crc kubenswrapper[4727]: I1210 14:33:46.775410 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-nn8cx" podStartSLOduration=104.775376485 podStartE2EDuration="1m44.775376485s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:46.755397313 +0000 UTC m=+130.950171855" watchObservedRunningTime="2025-12-10 14:33:46.775376485 +0000 UTC m=+130.970151027" Dec 10 14:33:46 crc kubenswrapper[4727]: I1210 14:33:46.775733 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=73.775727843 podStartE2EDuration="1m13.775727843s" podCreationTimestamp="2025-12-10 14:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:46.775540808 +0000 UTC m=+130.970315370" watchObservedRunningTime="2025-12-10 14:33:46.775727843 +0000 UTC m=+130.970502385" Dec 10 14:33:46 crc kubenswrapper[4727]: I1210 14:33:46.856815 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6ph7v" podStartSLOduration=104.856791777 podStartE2EDuration="1m44.856791777s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:46.855868695 +0000 UTC m=+131.050643257" watchObservedRunningTime="2025-12-10 14:33:46.856791777 +0000 UTC m=+131.051566309" Dec 10 14:33:46 crc kubenswrapper[4727]: I1210 14:33:46.892736 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.892713687 podStartE2EDuration="1m30.892713687s" podCreationTimestamp="2025-12-10 14:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:46.891222313 +0000 UTC m=+131.085996855" watchObservedRunningTime="2025-12-10 14:33:46.892713687 +0000 UTC m=+131.087488229" Dec 10 14:33:46 crc kubenswrapper[4727]: I1210 14:33:46.908663 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-cstpp" podStartSLOduration=105.908632495 podStartE2EDuration="1m45.908632495s" podCreationTimestamp="2025-12-10 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:46.908573214 +0000 UTC m=+131.103347756" watchObservedRunningTime="2025-12-10 14:33:46.908632495 +0000 UTC m=+131.103407037" Dec 10 14:33:46 crc kubenswrapper[4727]: I1210 14:33:46.942829 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=19.942783365 podStartE2EDuration="19.942783365s" podCreationTimestamp="2025-12-10 14:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:46.941777081 +0000 UTC m=+131.136551623" watchObservedRunningTime="2025-12-10 14:33:46.942783365 +0000 UTC m=+131.137557907" Dec 10 14:33:46 crc kubenswrapper[4727]: I1210 14:33:46.955388 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-pq9t7" podStartSLOduration=105.955356655 podStartE2EDuration="1m45.955356655s" podCreationTimestamp="2025-12-10 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:46.95469031 +0000 UTC m=+131.149464852" watchObservedRunningTime="2025-12-10 14:33:46.955356655 +0000 UTC m=+131.150131197" Dec 10 14:33:47 crc kubenswrapper[4727]: E1210 14:33:47.537296 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:33:47 crc kubenswrapper[4727]: I1210 14:33:47.562179 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:47 crc kubenswrapper[4727]: I1210 14:33:47.562249 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:47 crc kubenswrapper[4727]: I1210 14:33:47.562355 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:47 crc kubenswrapper[4727]: E1210 14:33:47.562529 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:47 crc kubenswrapper[4727]: I1210 14:33:47.562928 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:47 crc kubenswrapper[4727]: E1210 14:33:47.563129 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:47 crc kubenswrapper[4727]: E1210 14:33:47.562950 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:47 crc kubenswrapper[4727]: E1210 14:33:47.563442 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:49 crc kubenswrapper[4727]: I1210 14:33:49.562220 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:49 crc kubenswrapper[4727]: I1210 14:33:49.562290 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:49 crc kubenswrapper[4727]: I1210 14:33:49.562393 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:49 crc kubenswrapper[4727]: E1210 14:33:49.562408 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:49 crc kubenswrapper[4727]: I1210 14:33:49.562472 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:49 crc kubenswrapper[4727]: E1210 14:33:49.562568 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:49 crc kubenswrapper[4727]: E1210 14:33:49.562731 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:49 crc kubenswrapper[4727]: E1210 14:33:49.562843 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:49 crc kubenswrapper[4727]: I1210 14:33:49.755352 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:49 crc kubenswrapper[4727]: I1210 14:33:49.755408 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:49 crc kubenswrapper[4727]: I1210 14:33:49.755418 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:49 crc kubenswrapper[4727]: I1210 14:33:49.755437 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:49 crc kubenswrapper[4727]: I1210 14:33:49.755447 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:49Z","lastTransitionTime":"2025-12-10T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:49 crc kubenswrapper[4727]: I1210 14:33:49.823733 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn"] Dec 10 14:33:49 crc kubenswrapper[4727]: I1210 14:33:49.824553 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:49 crc kubenswrapper[4727]: I1210 14:33:49.826972 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 10 14:33:49 crc kubenswrapper[4727]: I1210 14:33:49.828233 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 10 14:33:49 crc kubenswrapper[4727]: I1210 14:33:49.828300 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 10 14:33:49 crc kubenswrapper[4727]: I1210 14:33:49.830402 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.034731 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35b8aaa5-fd37-4860-afae-23dbc9e6ff1a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-svxwn\" (UID: \"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.034841 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35b8aaa5-fd37-4860-afae-23dbc9e6ff1a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-svxwn\" (UID: \"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.034883 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/35b8aaa5-fd37-4860-afae-23dbc9e6ff1a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-svxwn\" (UID: \"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.034929 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35b8aaa5-fd37-4860-afae-23dbc9e6ff1a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-svxwn\" (UID: \"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.035017 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35b8aaa5-fd37-4860-afae-23dbc9e6ff1a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-svxwn\" (UID: \"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.135823 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35b8aaa5-fd37-4860-afae-23dbc9e6ff1a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-svxwn\" (UID: \"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.135972 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35b8aaa5-fd37-4860-afae-23dbc9e6ff1a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-svxwn\" (UID: \"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.136025 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35b8aaa5-fd37-4860-afae-23dbc9e6ff1a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-svxwn\" (UID: \"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.136069 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35b8aaa5-fd37-4860-afae-23dbc9e6ff1a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-svxwn\" (UID: \"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.136131 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35b8aaa5-fd37-4860-afae-23dbc9e6ff1a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-svxwn\" (UID: \"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.136200 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/35b8aaa5-fd37-4860-afae-23dbc9e6ff1a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-svxwn\" (UID: \"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.136293 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35b8aaa5-fd37-4860-afae-23dbc9e6ff1a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-svxwn\" (UID: \"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.137053 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35b8aaa5-fd37-4860-afae-23dbc9e6ff1a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-svxwn\" (UID: \"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.147637 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35b8aaa5-fd37-4860-afae-23dbc9e6ff1a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-svxwn\" (UID: \"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.160735 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35b8aaa5-fd37-4860-afae-23dbc9e6ff1a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-svxwn\" (UID: \"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.441720 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.652141 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" event={"ID":"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a","Type":"ContainerStarted","Data":"4d5a9aa1a514a49ee2a8e6a5877cc9ffedab9477f36b3fa677610b0580c8f375"} Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.652248 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" event={"ID":"35b8aaa5-fd37-4860-afae-23dbc9e6ff1a","Type":"ContainerStarted","Data":"e206b2d49b5f16e565b9a2d6f1ae95e534674c086fb80d39908d884657b70c59"} Dec 10 14:33:50 crc kubenswrapper[4727]: I1210 14:33:50.690750 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-svxwn" podStartSLOduration=108.69070503 podStartE2EDuration="1m48.69070503s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:50.687947482 +0000 UTC m=+134.882722034" watchObservedRunningTime="2025-12-10 14:33:50.69070503 +0000 UTC m=+134.885479572" Dec 10 14:33:51 crc kubenswrapper[4727]: I1210 14:33:51.562997 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:51 crc kubenswrapper[4727]: I1210 14:33:51.562997 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:51 crc kubenswrapper[4727]: I1210 14:33:51.563129 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:51 crc kubenswrapper[4727]: E1210 14:33:51.563185 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:51 crc kubenswrapper[4727]: E1210 14:33:51.563227 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:51 crc kubenswrapper[4727]: I1210 14:33:51.563033 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:51 crc kubenswrapper[4727]: E1210 14:33:51.563308 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:51 crc kubenswrapper[4727]: E1210 14:33:51.563511 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:52 crc kubenswrapper[4727]: E1210 14:33:52.538606 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:33:53 crc kubenswrapper[4727]: I1210 14:33:53.562860 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:53 crc kubenswrapper[4727]: I1210 14:33:53.562962 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:53 crc kubenswrapper[4727]: I1210 14:33:53.563364 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:53 crc kubenswrapper[4727]: I1210 14:33:53.563409 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:53 crc kubenswrapper[4727]: E1210 14:33:53.563571 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:53 crc kubenswrapper[4727]: E1210 14:33:53.563651 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:53 crc kubenswrapper[4727]: E1210 14:33:53.563786 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:53 crc kubenswrapper[4727]: E1210 14:33:53.564357 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:53 crc kubenswrapper[4727]: I1210 14:33:53.564680 4727 scope.go:117] "RemoveContainer" containerID="b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12" Dec 10 14:33:53 crc kubenswrapper[4727]: E1210 14:33:53.564920 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k8b7p_openshift-ovn-kubernetes(5b9f88bc-1b6e-4dd4-9d6e-febdde2facba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" Dec 10 14:33:55 crc kubenswrapper[4727]: I1210 14:33:55.562255 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:55 crc kubenswrapper[4727]: I1210 14:33:55.562300 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:55 crc kubenswrapper[4727]: I1210 14:33:55.562394 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:55 crc kubenswrapper[4727]: E1210 14:33:55.562442 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:55 crc kubenswrapper[4727]: I1210 14:33:55.562548 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:55 crc kubenswrapper[4727]: E1210 14:33:55.562708 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:55 crc kubenswrapper[4727]: E1210 14:33:55.562765 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:55 crc kubenswrapper[4727]: E1210 14:33:55.562884 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:57 crc kubenswrapper[4727]: E1210 14:33:57.541004 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:33:57 crc kubenswrapper[4727]: I1210 14:33:57.562259 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:57 crc kubenswrapper[4727]: I1210 14:33:57.562343 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:57 crc kubenswrapper[4727]: I1210 14:33:57.562408 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:57 crc kubenswrapper[4727]: E1210 14:33:57.562402 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:57 crc kubenswrapper[4727]: E1210 14:33:57.562514 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:57 crc kubenswrapper[4727]: I1210 14:33:57.562581 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:57 crc kubenswrapper[4727]: E1210 14:33:57.562703 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:57 crc kubenswrapper[4727]: E1210 14:33:57.562852 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:33:57 crc kubenswrapper[4727]: I1210 14:33:57.684890 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6ph7v_c724a700-1960-4452-9106-d71685d1b38c/kube-multus/1.log" Dec 10 14:33:57 crc kubenswrapper[4727]: I1210 14:33:57.685738 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6ph7v_c724a700-1960-4452-9106-d71685d1b38c/kube-multus/0.log" Dec 10 14:33:57 crc kubenswrapper[4727]: I1210 14:33:57.685830 4727 generic.go:334] "Generic (PLEG): container finished" podID="c724a700-1960-4452-9106-d71685d1b38c" containerID="029e1a2087c1fc515492e739da376e0970f5738dadd2a6842d8dfea64c28fe2f" exitCode=1 Dec 10 14:33:57 crc kubenswrapper[4727]: I1210 14:33:57.685886 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6ph7v" event={"ID":"c724a700-1960-4452-9106-d71685d1b38c","Type":"ContainerDied","Data":"029e1a2087c1fc515492e739da376e0970f5738dadd2a6842d8dfea64c28fe2f"} Dec 10 14:33:57 crc kubenswrapper[4727]: I1210 14:33:57.686040 4727 scope.go:117] "RemoveContainer" containerID="9913e3aef9f2ce0769b7ab2e99d3c70fa011c225ef91cbd2d934f99ce3111596" Dec 10 14:33:57 crc kubenswrapper[4727]: I1210 14:33:57.686698 4727 scope.go:117] "RemoveContainer" containerID="029e1a2087c1fc515492e739da376e0970f5738dadd2a6842d8dfea64c28fe2f" Dec 10 14:33:57 crc kubenswrapper[4727]: E1210 14:33:57.687023 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-6ph7v_openshift-multus(c724a700-1960-4452-9106-d71685d1b38c)\"" pod="openshift-multus/multus-6ph7v" podUID="c724a700-1960-4452-9106-d71685d1b38c" Dec 10 14:33:58 crc kubenswrapper[4727]: I1210 14:33:58.691590 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6ph7v_c724a700-1960-4452-9106-d71685d1b38c/kube-multus/1.log" Dec 10 14:33:59 crc kubenswrapper[4727]: I1210 14:33:59.562721 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:59 crc kubenswrapper[4727]: I1210 14:33:59.562746 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:59 crc kubenswrapper[4727]: I1210 14:33:59.562816 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:59 crc kubenswrapper[4727]: I1210 14:33:59.562824 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:33:59 crc kubenswrapper[4727]: E1210 14:33:59.563198 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:59 crc kubenswrapper[4727]: E1210 14:33:59.563305 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:59 crc kubenswrapper[4727]: E1210 14:33:59.563412 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:59 crc kubenswrapper[4727]: E1210 14:33:59.563587 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:34:01 crc kubenswrapper[4727]: I1210 14:34:01.562837 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:01 crc kubenswrapper[4727]: I1210 14:34:01.562941 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:01 crc kubenswrapper[4727]: I1210 14:34:01.562849 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:34:01 crc kubenswrapper[4727]: E1210 14:34:01.563061 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:34:01 crc kubenswrapper[4727]: I1210 14:34:01.562837 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:34:01 crc kubenswrapper[4727]: E1210 14:34:01.563326 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:34:01 crc kubenswrapper[4727]: E1210 14:34:01.563352 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:34:01 crc kubenswrapper[4727]: E1210 14:34:01.563405 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:34:02 crc kubenswrapper[4727]: E1210 14:34:02.542943 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:34:03 crc kubenswrapper[4727]: I1210 14:34:03.562977 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:03 crc kubenswrapper[4727]: I1210 14:34:03.562979 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:03 crc kubenswrapper[4727]: E1210 14:34:03.563155 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:34:03 crc kubenswrapper[4727]: I1210 14:34:03.563206 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:34:03 crc kubenswrapper[4727]: I1210 14:34:03.563295 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:34:03 crc kubenswrapper[4727]: E1210 14:34:03.563292 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:34:03 crc kubenswrapper[4727]: E1210 14:34:03.563399 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:34:03 crc kubenswrapper[4727]: E1210 14:34:03.563681 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:34:05 crc kubenswrapper[4727]: I1210 14:34:05.562386 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:05 crc kubenswrapper[4727]: I1210 14:34:05.562437 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:05 crc kubenswrapper[4727]: I1210 14:34:05.562509 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:34:05 crc kubenswrapper[4727]: I1210 14:34:05.562570 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:34:05 crc kubenswrapper[4727]: E1210 14:34:05.562680 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:34:05 crc kubenswrapper[4727]: E1210 14:34:05.562786 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:34:05 crc kubenswrapper[4727]: E1210 14:34:05.562948 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:34:05 crc kubenswrapper[4727]: E1210 14:34:05.563005 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:34:07 crc kubenswrapper[4727]: E1210 14:34:07.543721 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:34:07 crc kubenswrapper[4727]: I1210 14:34:07.562500 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:34:07 crc kubenswrapper[4727]: I1210 14:34:07.563154 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:07 crc kubenswrapper[4727]: E1210 14:34:07.563281 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:34:07 crc kubenswrapper[4727]: I1210 14:34:07.563164 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:07 crc kubenswrapper[4727]: I1210 14:34:07.563157 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:34:07 crc kubenswrapper[4727]: E1210 14:34:07.563450 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:34:07 crc kubenswrapper[4727]: I1210 14:34:07.563528 4727 scope.go:117] "RemoveContainer" containerID="b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12" Dec 10 14:34:07 crc kubenswrapper[4727]: E1210 14:34:07.563529 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:34:07 crc kubenswrapper[4727]: E1210 14:34:07.563956 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:34:07 crc kubenswrapper[4727]: I1210 14:34:07.731419 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/3.log" Dec 10 14:34:07 crc kubenswrapper[4727]: I1210 14:34:07.734416 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerStarted","Data":"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43"} Dec 10 14:34:07 crc kubenswrapper[4727]: I1210 14:34:07.735126 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:34:07 crc kubenswrapper[4727]: I1210 14:34:07.774351 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podStartSLOduration=125.774330614 podStartE2EDuration="2m5.774330614s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:07.772855897 +0000 UTC m=+151.967630439" watchObservedRunningTime="2025-12-10 14:34:07.774330614 +0000 UTC m=+151.969105156" Dec 10 14:34:08 crc kubenswrapper[4727]: I1210 14:34:08.955673 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wwmwn"] Dec 10 14:34:08 crc kubenswrapper[4727]: I1210 14:34:08.955847 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:34:08 crc kubenswrapper[4727]: E1210 14:34:08.955967 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:34:09 crc kubenswrapper[4727]: I1210 14:34:09.562472 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:34:09 crc kubenswrapper[4727]: I1210 14:34:09.562564 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:09 crc kubenswrapper[4727]: E1210 14:34:09.562640 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:34:09 crc kubenswrapper[4727]: E1210 14:34:09.562724 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:34:09 crc kubenswrapper[4727]: I1210 14:34:09.563057 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:09 crc kubenswrapper[4727]: E1210 14:34:09.563146 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:34:09 crc kubenswrapper[4727]: I1210 14:34:09.563241 4727 scope.go:117] "RemoveContainer" containerID="029e1a2087c1fc515492e739da376e0970f5738dadd2a6842d8dfea64c28fe2f" Dec 10 14:34:10 crc kubenswrapper[4727]: I1210 14:34:10.562708 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:34:10 crc kubenswrapper[4727]: E1210 14:34:10.563473 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:34:10 crc kubenswrapper[4727]: I1210 14:34:10.752953 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6ph7v_c724a700-1960-4452-9106-d71685d1b38c/kube-multus/1.log" Dec 10 14:34:10 crc kubenswrapper[4727]: I1210 14:34:10.753023 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6ph7v" event={"ID":"c724a700-1960-4452-9106-d71685d1b38c","Type":"ContainerStarted","Data":"9852f93c90775d9bfb9a2a4dbd2105b8926e215a6ffbd85de1dd7b2ba100b90e"} Dec 10 14:34:11 crc kubenswrapper[4727]: I1210 14:34:11.562869 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:11 crc kubenswrapper[4727]: I1210 14:34:11.563072 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:11 crc kubenswrapper[4727]: I1210 14:34:11.562987 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:34:11 crc kubenswrapper[4727]: E1210 14:34:11.563235 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:34:11 crc kubenswrapper[4727]: E1210 14:34:11.563387 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:34:11 crc kubenswrapper[4727]: E1210 14:34:11.563660 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:34:12 crc kubenswrapper[4727]: E1210 14:34:12.546383 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:34:12 crc kubenswrapper[4727]: I1210 14:34:12.562507 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:34:12 crc kubenswrapper[4727]: E1210 14:34:12.562761 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:34:13 crc kubenswrapper[4727]: I1210 14:34:13.563146 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:13 crc kubenswrapper[4727]: I1210 14:34:13.563179 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:13 crc kubenswrapper[4727]: E1210 14:34:13.564031 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:34:13 crc kubenswrapper[4727]: E1210 14:34:13.564090 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:34:13 crc kubenswrapper[4727]: I1210 14:34:13.563265 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:34:13 crc kubenswrapper[4727]: E1210 14:34:13.564201 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:34:14 crc kubenswrapper[4727]: I1210 14:34:14.562042 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:34:14 crc kubenswrapper[4727]: E1210 14:34:14.562497 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:34:15 crc kubenswrapper[4727]: I1210 14:34:15.500686 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:15 crc kubenswrapper[4727]: I1210 14:34:15.500869 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:15 crc kubenswrapper[4727]: I1210 14:34:15.500944 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:15 crc kubenswrapper[4727]: I1210 14:34:15.500985 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:34:15 crc kubenswrapper[4727]: I1210 14:34:15.501044 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.501118 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.501205 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:36:17.50115298 +0000 UTC m=+281.695927522 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.501243 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.501273 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.501276 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:36:17.501257963 +0000 UTC m=+281.696032505 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.501269 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.501297 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.501419 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:36:17.501385066 +0000 UTC m=+281.696159778 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.501456 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.501477 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.501495 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.501457 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:36:17.501432837 +0000 UTC m=+281.696207369 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.501564 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 14:36:17.50155477 +0000 UTC m=+281.696329312 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:34:15 crc kubenswrapper[4727]: I1210 14:34:15.562316 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.562626 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:34:15 crc kubenswrapper[4727]: I1210 14:34:15.563115 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:15 crc kubenswrapper[4727]: I1210 14:34:15.563213 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.563461 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:34:15 crc kubenswrapper[4727]: E1210 14:34:15.563624 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:34:16 crc kubenswrapper[4727]: I1210 14:34:16.562135 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:34:16 crc kubenswrapper[4727]: E1210 14:34:16.563775 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wwmwn" podUID="2bcea03d-69bd-4530-91b9-ca3ba1ffc871" Dec 10 14:34:17 crc kubenswrapper[4727]: I1210 14:34:17.564656 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:17 crc kubenswrapper[4727]: I1210 14:34:17.565289 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:17 crc kubenswrapper[4727]: I1210 14:34:17.565323 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:34:17 crc kubenswrapper[4727]: I1210 14:34:17.568458 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 10 14:34:17 crc kubenswrapper[4727]: I1210 14:34:17.568478 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 10 14:34:17 crc kubenswrapper[4727]: I1210 14:34:17.569391 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 10 14:34:17 crc kubenswrapper[4727]: I1210 14:34:17.569537 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 10 14:34:18 crc kubenswrapper[4727]: I1210 14:34:18.562620 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:34:18 crc kubenswrapper[4727]: I1210 14:34:18.565643 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 10 14:34:18 crc kubenswrapper[4727]: I1210 14:34:18.567018 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.792186 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.834166 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kljsv"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.835602 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.846241 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.846734 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.847005 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.847292 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.850377 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.850626 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.851100 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.851653 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.852192 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.852884 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-455f6"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.853231 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.853474 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.854200 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cf2gn"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.855200 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.855307 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.856535 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.856552 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sjrxq"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.857675 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.869369 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.870109 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.870564 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-swj5s"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.870652 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bda144f-7fce-4ff5-9b8d-1363a5d14fc8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kljsv\" (UID: \"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.870697 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bda144f-7fce-4ff5-9b8d-1363a5d14fc8-service-ca-bundle\") pod \"authentication-operator-69f744f599-kljsv\" (UID: \"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.870853 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bda144f-7fce-4ff5-9b8d-1363a5d14fc8-config\") pod \"authentication-operator-69f744f599-kljsv\" (UID: \"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.871009 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.871000 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22z7b\" (UniqueName: \"kubernetes.io/projected/7bda144f-7fce-4ff5-9b8d-1363a5d14fc8-kube-api-access-22z7b\") pod \"authentication-operator-69f744f599-kljsv\" (UID: \"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.871153 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bda144f-7fce-4ff5-9b8d-1363a5d14fc8-serving-cert\") pod \"authentication-operator-69f744f599-kljsv\" (UID: \"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.871208 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.871497 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.880288 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.880308 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.880651 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.880762 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.880882 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.880984 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.881199 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.881323 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.881433 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.881439 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.881566 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.881688 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.881757 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.881875 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.882031 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.884617 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.884753 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.888043 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.888283 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 10 14:34:20 crc 
kubenswrapper[4727]: I1210 14:34:20.888470 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.888702 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.888972 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.889279 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.889685 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.889718 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.889964 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.890142 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.890298 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.890518 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.890691 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.890850 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.890945 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.891284 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.891532 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.891666 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.891715 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.891833 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.891666 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.891833 4727 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.892469 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.892949 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.893504 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.893810 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.894084 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.894209 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.894294 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.894515 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.894689 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.894895 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.895033 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.895048 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.897826 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nv97s"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.912983 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.913197 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.914057 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tzznn"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.914406 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.914834 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.916300 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.917294 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.917842 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n87wt"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.918177 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.918999 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.920668 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.921049 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.927529 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.931530 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.940772 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.941175 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.950599 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-m2s8b"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.962218 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-m2s8b" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.962958 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.962986 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.963078 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.963354 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.963427 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.963497 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.963560 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.963684 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.963719 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.963735 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.963723 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.963841 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.963943 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.964047 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.964171 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.965539 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-mnprj"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.966516 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.967204 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-mnprj" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.969719 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.970274 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.970509 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.971069 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.971776 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.971832 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22z7b\" (UniqueName: \"kubernetes.io/projected/7bda144f-7fce-4ff5-9b8d-1363a5d14fc8-kube-api-access-22z7b\") pod \"authentication-operator-69f744f599-kljsv\" (UID: \"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.971864 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bda144f-7fce-4ff5-9b8d-1363a5d14fc8-serving-cert\") pod \"authentication-operator-69f744f599-kljsv\" (UID: \"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.971894 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-config\") pod \"controller-manager-879f6c89f-tzznn\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.971934 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/89ccee90-bde1-4102-a1a6-08d2b5d80aac-audit\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.971964 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhv9c\" (UniqueName: \"kubernetes.io/projected/f90473e8-86a2-4a1c-aadf-31d286ed0f21-kube-api-access-jhv9c\") pod \"controller-manager-879f6c89f-tzznn\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.971988 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972012 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec2ea8fb-1885-4b49-8bd2-ee4a63586ade-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sjrxq\" (UID: \"ec2ea8fb-1885-4b49-8bd2-ee4a63586ade\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972035 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89ccee90-bde1-4102-a1a6-08d2b5d80aac-etcd-serving-ca\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972059 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mp4p\" (UniqueName: \"kubernetes.io/projected/f1f25eda-d2d3-4eb9-9d05-24cc0293fa37-kube-api-access-9mp4p\") pod \"openshift-config-operator-7777fb866f-2qk8s\" (UID: \"f1f25eda-d2d3-4eb9-9d05-24cc0293fa37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972083 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89ccee90-bde1-4102-a1a6-08d2b5d80aac-audit-dir\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972116 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-client-ca\") pod \"route-controller-manager-6576b87f9c-q8jzk\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972143 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972172 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-serving-cert\") pod \"route-controller-manager-6576b87f9c-q8jzk\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972204 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972227 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-trusted-ca-bundle\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972253 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972275 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-serving-cert\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972302 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkzh5\" (UniqueName: \"kubernetes.io/projected/7ff0e752-75eb-4639-a821-ccbaf0e2da51-kube-api-access-kkzh5\") pod \"machine-approver-56656f9798-8ppj4\" (UID: \"7ff0e752-75eb-4639-a821-ccbaf0e2da51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972332 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/89ccee90-bde1-4102-a1a6-08d2b5d80aac-image-import-ca\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972371 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7ff0e752-75eb-4639-a821-ccbaf0e2da51-auth-proxy-config\") pod \"machine-approver-56656f9798-8ppj4\" (UID: \"7ff0e752-75eb-4639-a821-ccbaf0e2da51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972393 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-service-ca\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972422 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89ccee90-bde1-4102-a1a6-08d2b5d80aac-serving-cert\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972456 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d586de-34e2-49e9-b775-2484e8efffa5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-f5t57\" (UID: \"67d586de-34e2-49e9-b775-2484e8efffa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972481 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-642jv\" (UniqueName: \"kubernetes.io/projected/67d586de-34e2-49e9-b775-2484e8efffa5-kube-api-access-642jv\") pod \"openshift-apiserver-operator-796bbdcf4f-f5t57\" (UID: \"67d586de-34e2-49e9-b775-2484e8efffa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972506 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-audit-policies\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972531 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d6ceeb8-a826-4a2b-99d1-19d071983122-trusted-ca\") pod \"console-operator-58897d9998-455f6\" (UID: \"9d6ceeb8-a826-4a2b-99d1-19d071983122\") " pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972569 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972599 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bda144f-7fce-4ff5-9b8d-1363a5d14fc8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kljsv\" (UID: \"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972623 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bda144f-7fce-4ff5-9b8d-1363a5d14fc8-service-ca-bundle\") pod \"authentication-operator-69f744f599-kljsv\" (UID: \"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972648 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tzznn\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972679 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2ea8fb-1885-4b49-8bd2-ee4a63586ade-config\") pod \"machine-api-operator-5694c8668f-sjrxq\" (UID: \"ec2ea8fb-1885-4b49-8bd2-ee4a63586ade\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972704 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972729 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-serving-cert\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972779 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6bx2\" (UniqueName: \"kubernetes.io/projected/9d6ceeb8-a826-4a2b-99d1-19d071983122-kube-api-access-l6bx2\") pod \"console-operator-58897d9998-455f6\" (UID: \"9d6ceeb8-a826-4a2b-99d1-19d071983122\") " pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972826 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k27rf\" (UniqueName: \"kubernetes.io/projected/aa4939cc-34b3-4562-9798-92d443fb76ca-kube-api-access-k27rf\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972854 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972881 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgrsp\" (UniqueName: \"kubernetes.io/projected/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-kube-api-access-qgrsp\") pod \"route-controller-manager-6576b87f9c-q8jzk\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972920 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-rlkh6\" (UniqueName: \"kubernetes.io/projected/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-kube-api-access-rlkh6\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972947 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-audit-policies\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.972976 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973000 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973026 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bda144f-7fce-4ff5-9b8d-1363a5d14fc8-config\") pod \"authentication-operator-69f744f599-kljsv\" (UID: \"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973049 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973075 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67d586de-34e2-49e9-b775-2484e8efffa5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-f5t57\" (UID: \"67d586de-34e2-49e9-b775-2484e8efffa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973099 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f90473e8-86a2-4a1c-aadf-31d286ed0f21-serving-cert\") pod \"controller-manager-879f6c89f-tzznn\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973122 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/89ccee90-bde1-4102-a1a6-08d2b5d80aac-etcd-client\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973145 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff0e752-75eb-4639-a821-ccbaf0e2da51-config\") pod \"machine-approver-56656f9798-8ppj4\" (UID: \"7ff0e752-75eb-4639-a821-ccbaf0e2da51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973173 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89ccee90-bde1-4102-a1a6-08d2b5d80aac-encryption-config\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973210 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d6ceeb8-a826-4a2b-99d1-19d071983122-config\") pod \"console-operator-58897d9998-455f6\" (UID: \"9d6ceeb8-a826-4a2b-99d1-19d071983122\") " pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973221 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973248 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ec2ea8fb-1885-4b49-8bd2-ee4a63586ade-images\") pod \"machine-api-operator-5694c8668f-sjrxq\" (UID: \"ec2ea8fb-1885-4b49-8bd2-ee4a63586ade\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973273 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmjl2\" (UniqueName: \"kubernetes.io/projected/ec2ea8fb-1885-4b49-8bd2-ee4a63586ade-kube-api-access-lmjl2\") pod \"machine-api-operator-5694c8668f-sjrxq\" (UID: \"ec2ea8fb-1885-4b49-8bd2-ee4a63586ade\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973300 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-encryption-config\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973320 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpnqx\" (UniqueName: \"kubernetes.io/projected/89ccee90-bde1-4102-a1a6-08d2b5d80aac-kube-api-access-kpnqx\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973355 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973381 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f25eda-d2d3-4eb9-9d05-24cc0293fa37-serving-cert\") pod \"openshift-config-operator-7777fb866f-2qk8s\" (UID: \"f1f25eda-d2d3-4eb9-9d05-24cc0293fa37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973437 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/198563d9-9967-47b7-aa02-c2b5be2d7c4b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nfknc\" (UID: \"198563d9-9967-47b7-aa02-c2b5be2d7c4b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973459 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89ccee90-bde1-4102-a1a6-08d2b5d80aac-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973485 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-etcd-client\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973508 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d6ceeb8-a826-4a2b-99d1-19d071983122-serving-cert\") pod \"console-operator-58897d9998-455f6\" (UID: \"9d6ceeb8-a826-4a2b-99d1-19d071983122\") " pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973526 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973737 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973862 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973993 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.974103 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.974224 4727 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.974448 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.974626 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.974802 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.973528 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f1f25eda-d2d3-4eb9-9d05-24cc0293fa37-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2qk8s\" (UID: \"f1f25eda-d2d3-4eb9-9d05-24cc0293fa37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.974897 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89ccee90-bde1-4102-a1a6-08d2b5d80aac-node-pullsecrets\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.974927 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bda144f-7fce-4ff5-9b8d-1363a5d14fc8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kljsv\" (UID: \"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.974948 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.974995 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-audit-dir\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.975029 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa4939cc-34b3-4562-9798-92d443fb76ca-audit-dir\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.975067 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-client-ca\") pod 
\"controller-manager-879f6c89f-tzznn\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.975090 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7ff0e752-75eb-4639-a821-ccbaf0e2da51-machine-approver-tls\") pod \"machine-approver-56656f9798-8ppj4\" (UID: \"7ff0e752-75eb-4639-a821-ccbaf0e2da51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.975112 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-oauth-config\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.975132 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-oauth-serving-cert\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.975169 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4s2z\" (UniqueName: \"kubernetes.io/projected/6d8cde10-5565-4980-a4e2-a30f26707a0e-kube-api-access-t4s2z\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.975193 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89ccee90-bde1-4102-a1a6-08d2b5d80aac-config\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.975246 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-config\") pod \"route-controller-manager-6576b87f9c-q8jzk\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.975272 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp2dq\" (UniqueName: \"kubernetes.io/projected/198563d9-9967-47b7-aa02-c2b5be2d7c4b-kube-api-access-gp2dq\") pod \"cluster-samples-operator-665b6dd947-nfknc\" (UID: \"198563d9-9967-47b7-aa02-c2b5be2d7c4b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.975295 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-config\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " 
pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.976441 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.976488 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.976784 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.976800 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bs68z"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.976805 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bda144f-7fce-4ff5-9b8d-1363a5d14fc8-config\") pod \"authentication-operator-69f744f599-kljsv\" (UID: \"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.977570 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.978781 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bda144f-7fce-4ff5-9b8d-1363a5d14fc8-service-ca-bundle\") pod \"authentication-operator-69f744f599-kljsv\" (UID: \"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.979040 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.989536 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bda144f-7fce-4ff5-9b8d-1363a5d14fc8-serving-cert\") pod \"authentication-operator-69f744f599-kljsv\" (UID: \"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.989966 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.991783 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dgbsr"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.992634 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.992720 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.993799 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.995379 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.997133 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.997743 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d"] Dec 10 14:34:20 crc kubenswrapper[4727]: I1210 14:34:20.998286 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:20.999110 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:20.999113 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6w2jq"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.004795 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.001362 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.012855 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.013099 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.014728 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rvngm"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.017042 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rvngm" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.022083 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.022575 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.023621 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.024142 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kljsv"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.024342 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.024496 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.033179 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.033941 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.034649 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.034669 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-59pjb"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.035205 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-59pjb" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.037623 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.037650 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.037625 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.038764 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.038898 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.039434 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.042122 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.043097 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.043607 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.043810 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.058108 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.061617 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.062212 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.062744 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.063931 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.064025 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.064431 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.066380 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4p8m"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.066798 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4p8m" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.068046 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-psftx"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.068601 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-psftx" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.069513 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-455f6"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.070693 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mh2pb"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.071144 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mh2pb" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.075297 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.075984 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-serving-cert\") pod \"route-controller-manager-6576b87f9c-q8jzk\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076013 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076035 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-trusted-ca-bundle\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076055 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076076 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-serving-cert\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076093 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkzh5\" (UniqueName: \"kubernetes.io/projected/7ff0e752-75eb-4639-a821-ccbaf0e2da51-kube-api-access-kkzh5\") pod \"machine-approver-56656f9798-8ppj4\" (UID: \"7ff0e752-75eb-4639-a821-ccbaf0e2da51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076109 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/89ccee90-bde1-4102-a1a6-08d2b5d80aac-image-import-ca\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076130 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7ff0e752-75eb-4639-a821-ccbaf0e2da51-auth-proxy-config\") pod \"machine-approver-56656f9798-8ppj4\" (UID: 
\"7ff0e752-75eb-4639-a821-ccbaf0e2da51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076154 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-service-ca\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076169 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89ccee90-bde1-4102-a1a6-08d2b5d80aac-serving-cert\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076187 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d586de-34e2-49e9-b775-2484e8efffa5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-f5t57\" (UID: \"67d586de-34e2-49e9-b775-2484e8efffa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076201 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-642jv\" (UniqueName: \"kubernetes.io/projected/67d586de-34e2-49e9-b775-2484e8efffa5-kube-api-access-642jv\") pod \"openshift-apiserver-operator-796bbdcf4f-f5t57\" (UID: \"67d586de-34e2-49e9-b775-2484e8efffa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076216 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-audit-policies\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076231 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d6ceeb8-a826-4a2b-99d1-19d071983122-trusted-ca\") pod \"console-operator-58897d9998-455f6\" (UID: \"9d6ceeb8-a826-4a2b-99d1-19d071983122\") " pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076256 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076274 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tzznn\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076295 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2ea8fb-1885-4b49-8bd2-ee4a63586ade-config\") pod \"machine-api-operator-5694c8668f-sjrxq\" (UID: \"ec2ea8fb-1885-4b49-8bd2-ee4a63586ade\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076314 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076331 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-serving-cert\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076345 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6bx2\" (UniqueName: \"kubernetes.io/projected/9d6ceeb8-a826-4a2b-99d1-19d071983122-kube-api-access-l6bx2\") pod \"console-operator-58897d9998-455f6\" (UID: \"9d6ceeb8-a826-4a2b-99d1-19d071983122\") " pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076367 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k27rf\" (UniqueName: \"kubernetes.io/projected/aa4939cc-34b3-4562-9798-92d443fb76ca-kube-api-access-k27rf\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076384 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076398 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgrsp\" (UniqueName: \"kubernetes.io/projected/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-kube-api-access-qgrsp\") pod \"route-controller-manager-6576b87f9c-q8jzk\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076416 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlkh6\" (UniqueName: \"kubernetes.io/projected/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-kube-api-access-rlkh6\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076431 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-audit-policies\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076448 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076465 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076483 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67d586de-34e2-49e9-b775-2484e8efffa5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-f5t57\" (UID: \"67d586de-34e2-49e9-b775-2484e8efffa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076515 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f90473e8-86a2-4a1c-aadf-31d286ed0f21-serving-cert\") pod \"controller-manager-879f6c89f-tzznn\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076530 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89ccee90-bde1-4102-a1a6-08d2b5d80aac-etcd-client\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076546 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff0e752-75eb-4639-a821-ccbaf0e2da51-config\") pod \"machine-approver-56656f9798-8ppj4\" (UID: \"7ff0e752-75eb-4639-a821-ccbaf0e2da51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076562 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89ccee90-bde1-4102-a1a6-08d2b5d80aac-encryption-config\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " 
pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076582 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d6ceeb8-a826-4a2b-99d1-19d071983122-config\") pod \"console-operator-58897d9998-455f6\" (UID: \"9d6ceeb8-a826-4a2b-99d1-19d071983122\") " pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076596 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ec2ea8fb-1885-4b49-8bd2-ee4a63586ade-images\") pod \"machine-api-operator-5694c8668f-sjrxq\" (UID: \"ec2ea8fb-1885-4b49-8bd2-ee4a63586ade\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076614 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmjl2\" (UniqueName: \"kubernetes.io/projected/ec2ea8fb-1885-4b49-8bd2-ee4a63586ade-kube-api-access-lmjl2\") pod \"machine-api-operator-5694c8668f-sjrxq\" (UID: \"ec2ea8fb-1885-4b49-8bd2-ee4a63586ade\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076630 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-encryption-config\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076646 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpnqx\" (UniqueName: \"kubernetes.io/projected/89ccee90-bde1-4102-a1a6-08d2b5d80aac-kube-api-access-kpnqx\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076668 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076684 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f25eda-d2d3-4eb9-9d05-24cc0293fa37-serving-cert\") pod \"openshift-config-operator-7777fb866f-2qk8s\" (UID: \"f1f25eda-d2d3-4eb9-9d05-24cc0293fa37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076700 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/198563d9-9967-47b7-aa02-c2b5be2d7c4b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nfknc\" (UID: \"198563d9-9967-47b7-aa02-c2b5be2d7c4b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076714 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89ccee90-bde1-4102-a1a6-08d2b5d80aac-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076731 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-etcd-client\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076746 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d6ceeb8-a826-4a2b-99d1-19d071983122-serving-cert\") pod \"console-operator-58897d9998-455f6\" (UID: \"9d6ceeb8-a826-4a2b-99d1-19d071983122\") " pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076763 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f1f25eda-d2d3-4eb9-9d05-24cc0293fa37-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2qk8s\" (UID: \"f1f25eda-d2d3-4eb9-9d05-24cc0293fa37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076779 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89ccee90-bde1-4102-a1a6-08d2b5d80aac-node-pullsecrets\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076795 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076810 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-audit-dir\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076825 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa4939cc-34b3-4562-9798-92d443fb76ca-audit-dir\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076842 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-client-ca\") pod \"controller-manager-879f6c89f-tzznn\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076857 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7ff0e752-75eb-4639-a821-ccbaf0e2da51-machine-approver-tls\") pod \"machine-approver-56656f9798-8ppj4\" (UID: \"7ff0e752-75eb-4639-a821-ccbaf0e2da51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076873 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-oauth-config\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076893 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-oauth-serving-cert\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076929 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4s2z\" (UniqueName: \"kubernetes.io/projected/6d8cde10-5565-4980-a4e2-a30f26707a0e-kube-api-access-t4s2z\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076947 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89ccee90-bde1-4102-a1a6-08d2b5d80aac-config\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076965 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-config\") pod \"route-controller-manager-6576b87f9c-q8jzk\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076982 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp2dq\" (UniqueName: \"kubernetes.io/projected/198563d9-9967-47b7-aa02-c2b5be2d7c4b-kube-api-access-gp2dq\") pod \"cluster-samples-operator-665b6dd947-nfknc\" (UID: \"198563d9-9967-47b7-aa02-c2b5be2d7c4b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.076999 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-config\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.077018 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.077050 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-config\") pod \"controller-manager-879f6c89f-tzznn\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.077065 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/89ccee90-bde1-4102-a1a6-08d2b5d80aac-audit\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.077081 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhv9c\" (UniqueName: \"kubernetes.io/projected/f90473e8-86a2-4a1c-aadf-31d286ed0f21-kube-api-access-jhv9c\") pod \"controller-manager-879f6c89f-tzznn\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.077097 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.077113 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec2ea8fb-1885-4b49-8bd2-ee4a63586ade-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sjrxq\" (UID: \"ec2ea8fb-1885-4b49-8bd2-ee4a63586ade\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.077130 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89ccee90-bde1-4102-a1a6-08d2b5d80aac-etcd-serving-ca\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.077155 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mp4p\" (UniqueName: \"kubernetes.io/projected/f1f25eda-d2d3-4eb9-9d05-24cc0293fa37-kube-api-access-9mp4p\") pod \"openshift-config-operator-7777fb866f-2qk8s\" (UID: \"f1f25eda-d2d3-4eb9-9d05-24cc0293fa37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.077170 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89ccee90-bde1-4102-a1a6-08d2b5d80aac-audit-dir\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " 
pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.077185 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-client-ca\") pod \"route-controller-manager-6576b87f9c-q8jzk\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.077201 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.077650 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.077714 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.078482 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-r4rds"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.078644 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-audit-dir\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.079257 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.079578 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-r4rds" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.080093 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sjrxq"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.080158 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.080427 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.081407 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-audit-policies\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.083681 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f1f25eda-d2d3-4eb9-9d05-24cc0293fa37-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2qk8s\" (UID: \"f1f25eda-d2d3-4eb9-9d05-24cc0293fa37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.083842 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa4939cc-34b3-4562-9798-92d443fb76ca-audit-dir\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.083941 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89ccee90-bde1-4102-a1a6-08d2b5d80aac-node-pullsecrets\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.084425 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89ccee90-bde1-4102-a1a6-08d2b5d80aac-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.084887 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tzznn\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.085202 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.086771 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.087084 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d6ceeb8-a826-4a2b-99d1-19d071983122-trusted-ca\") pod \"console-operator-58897d9998-455f6\" (UID: \"9d6ceeb8-a826-4a2b-99d1-19d071983122\") " pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.087196 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.087465 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-trusted-ca-bundle\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.087925 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-serving-cert\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.088071 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2ea8fb-1885-4b49-8bd2-ee4a63586ade-config\") pod \"machine-api-operator-5694c8668f-sjrxq\" (UID: \"ec2ea8fb-1885-4b49-8bd2-ee4a63586ade\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.088195 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bs68z"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.088246 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.088262 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-m2s8b"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.088775 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f90473e8-86a2-4a1c-aadf-31d286ed0f21-serving-cert\") pod \"controller-manager-879f6c89f-tzznn\" (UID: 
\"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.089244 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff0e752-75eb-4639-a821-ccbaf0e2da51-config\") pod \"machine-approver-56656f9798-8ppj4\" (UID: \"7ff0e752-75eb-4639-a821-ccbaf0e2da51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.089693 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-audit-policies\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.089768 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ec2ea8fb-1885-4b49-8bd2-ee4a63586ade-images\") pod \"machine-api-operator-5694c8668f-sjrxq\" (UID: \"ec2ea8fb-1885-4b49-8bd2-ee4a63586ade\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.090334 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.090461 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d6ceeb8-a826-4a2b-99d1-19d071983122-config\") pod \"console-operator-58897d9998-455f6\" (UID: \"9d6ceeb8-a826-4a2b-99d1-19d071983122\") " pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.090514 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-config\") pod \"controller-manager-879f6c89f-tzznn\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.090873 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-service-ca\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.091228 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/198563d9-9967-47b7-aa02-c2b5be2d7c4b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nfknc\" (UID: \"198563d9-9967-47b7-aa02-c2b5be2d7c4b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.091355 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-oauth-serving-cert\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.091402 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/89ccee90-bde1-4102-a1a6-08d2b5d80aac-audit\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.092141 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-config\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.092175 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/89ccee90-bde1-4102-a1a6-08d2b5d80aac-image-import-ca\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.092493 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-etcd-client\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.092616 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-config\") pod \"route-controller-manager-6576b87f9c-q8jzk\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.093915 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7ff0e752-75eb-4639-a821-ccbaf0e2da51-auth-proxy-config\") pod \"machine-approver-56656f9798-8ppj4\" (UID: \"7ff0e752-75eb-4639-a821-ccbaf0e2da51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.094067 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-client-ca\") pod \"controller-manager-879f6c89f-tzznn\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.094066 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89ccee90-bde1-4102-a1a6-08d2b5d80aac-config\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.094286 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d"] Dec 10 
14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.094436 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89ccee90-bde1-4102-a1a6-08d2b5d80aac-audit-dir\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.094768 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d586de-34e2-49e9-b775-2484e8efffa5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-f5t57\" (UID: \"67d586de-34e2-49e9-b775-2484e8efffa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.094770 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89ccee90-bde1-4102-a1a6-08d2b5d80aac-etcd-client\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.095228 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n87wt"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.095809 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.095857 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89ccee90-bde1-4102-a1a6-08d2b5d80aac-encryption-config\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.095965 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-client-ca\") pod \"route-controller-manager-6576b87f9c-q8jzk\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.096072 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.096181 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-serving-cert\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.096300 4727 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67d586de-34e2-49e9-b775-2484e8efffa5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-f5t57\" (UID: \"67d586de-34e2-49e9-b775-2484e8efffa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.096366 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-serving-cert\") pod \"route-controller-manager-6576b87f9c-q8jzk\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.096688 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-oauth-config\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.096940 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.096989 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d6ceeb8-a826-4a2b-99d1-19d071983122-serving-cert\") pod \"console-operator-58897d9998-455f6\" (UID: \"9d6ceeb8-a826-4a2b-99d1-19d071983122\") " pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.097129 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec2ea8fb-1885-4b49-8bd2-ee4a63586ade-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sjrxq\" (UID: \"ec2ea8fb-1885-4b49-8bd2-ee4a63586ade\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.097386 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89ccee90-bde1-4102-a1a6-08d2b5d80aac-etcd-serving-ca\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.098010 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.098586 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7ff0e752-75eb-4639-a821-ccbaf0e2da51-machine-approver-tls\") pod \"machine-approver-56656f9798-8ppj4\" (UID: \"7ff0e752-75eb-4639-a821-ccbaf0e2da51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.098799 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f1f25eda-d2d3-4eb9-9d05-24cc0293fa37-serving-cert\") pod \"openshift-config-operator-7777fb866f-2qk8s\" (UID: \"f1f25eda-d2d3-4eb9-9d05-24cc0293fa37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.099537 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.100106 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.100351 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-encryption-config\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.100764 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89ccee90-bde1-4102-a1a6-08d2b5d80aac-serving-cert\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.105602 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cf2gn"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.105703 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.105772 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-swj5s"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.118156 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tzznn"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.120335 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.121808 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nv97s"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.123142 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.125033 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.126237 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.127507 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.128585 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.129935 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.131389 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.132655 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.134337 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.135732 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.137885 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.138556 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.139860 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-hbkh7"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.140872 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hbkh7" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.141496 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mnprj"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.142553 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6w2jq"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.143772 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9nkp9"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.144409 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9nkp9" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.144965 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.146086 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rvngm"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.147471 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.149147 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.150463 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-59pjb"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.151540 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mh2pb"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.153050 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9nkp9"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.154593 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-psftx"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.156354 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4p8m"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.158002 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-r4rds"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.159584 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-89q8l"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.161148 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.161796 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-89q8l"] Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.179088 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.179482 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22z7b\" (UniqueName: \"kubernetes.io/projected/7bda144f-7fce-4ff5-9b8d-1363a5d14fc8-kube-api-access-22z7b\") pod \"authentication-operator-69f744f599-kljsv\" (UID: \"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.198934 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.219373 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.238777 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.258440 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.279380 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.298354 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.319474 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.339574 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.358270 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.377603 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.398789 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.419203 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.438936 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.459053 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.459453 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.482708 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.499238 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.518300 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.539229 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.558949 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.579514 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.599228 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.619198 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.640290 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.658954 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.680015 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.699440 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.718705 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.738844 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.744484 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kljsv"] Dec 10 14:34:21 crc kubenswrapper[4727]: W1210 14:34:21.756583 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bda144f_7fce_4ff5_9b8d_1363a5d14fc8.slice/crio-e268978371d89539700f3ab89431aa52c673c2cb66f053ba47cbfbfdd44f33dd WatchSource:0}: Error finding container 
e268978371d89539700f3ab89431aa52c673c2cb66f053ba47cbfbfdd44f33dd: Status 404 returned error can't find the container with id e268978371d89539700f3ab89431aa52c673c2cb66f053ba47cbfbfdd44f33dd Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.759109 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.786253 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.800050 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" event={"ID":"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8","Type":"ContainerStarted","Data":"e268978371d89539700f3ab89431aa52c673c2cb66f053ba47cbfbfdd44f33dd"} Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.800238 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.819604 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.839743 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.858094 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.878953 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.899400 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.918513 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.938501 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.959304 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 10 14:34:21 crc kubenswrapper[4727]: I1210 14:34:21.978845 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.018720 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.036782 4727 request.go:700] Waited for 1.001309021s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&limit=500&resourceVersion=0 Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.039700 4727 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.059460 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.079680 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.098801 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.119783 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.139344 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.158618 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.178704 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.198612 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.218873 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.238581 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.259532 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.279871 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.298738 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.319050 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.339042 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.358659 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.378739 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 
14:34:22.399236 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.428485 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.438533 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.448005 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7fpm\" (UniqueName: \"kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-kube-api-access-q7fpm\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.448077 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-registry-tls\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.448175 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-registry-certificates\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.448236 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.448259 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.448663 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.448714 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-trusted-ca\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.448755 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-bound-sa-token\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: E1210 14:34:22.450125 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:22.950085787 +0000 UTC m=+167.144860339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.459061 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.479322 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.500095 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.518219 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.539390 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.549447 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:22 crc kubenswrapper[4727]: E1210 14:34:22.549563 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:23.049536489 +0000 UTC m=+167.244311031 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.550403 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ksnn\" (UniqueName: \"kubernetes.io/projected/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-kube-api-access-7ksnn\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.550465 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fab99632-3a2a-40db-a351-7272d13aaa82-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rzzq5\" (UID: \"fab99632-3a2a-40db-a351-7272d13aaa82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.550488 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/229d2695-3886-468e-80c7-69660da8e109-config-volume\") pod \"dns-default-r4rds\" (UID: \"229d2695-3886-468e-80c7-69660da8e109\") " pod="openshift-dns/dns-default-r4rds" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.550615 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5e9adb2f-2d9c-461a-b86c-07e2325dade7-images\") pod \"machine-config-operator-74547568cd-sbrvz\" (UID: \"5e9adb2f-2d9c-461a-b86c-07e2325dade7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.550667 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-trusted-ca\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.550691 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f805118b-6de6-41c9-92c3-35acc76c5c9a-metrics-tls\") pod \"ingress-operator-5b745b69d9-4dg2g\" (UID: \"f805118b-6de6-41c9-92c3-35acc76c5c9a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.550793 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjlcx\" (UniqueName: \"kubernetes.io/projected/de3fc3e9-3742-497d-a3f3-73380ce16e70-kube-api-access-cjlcx\") pod \"ingress-canary-9nkp9\" (UID: \"de3fc3e9-3742-497d-a3f3-73380ce16e70\") " pod="openshift-ingress-canary/ingress-canary-9nkp9" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.551067 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-bound-sa-token\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.551175 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fab99632-3a2a-40db-a351-7272d13aaa82-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rzzq5\" (UID: \"fab99632-3a2a-40db-a351-7272d13aaa82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.551557 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-etcd-service-ca\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.551601 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-serving-cert\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.551640 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4m46\" (UniqueName: \"kubernetes.io/projected/78077a76-500c-439d-8a34-240d3af79fed-kube-api-access-w4m46\") pod \"kube-storage-version-migrator-operator-b67b599dd-gdctx\" (UID: \"78077a76-500c-439d-8a34-240d3af79fed\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.551899 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ww2q\" (UniqueName: \"kubernetes.io/projected/e7210fed-0ce4-4a6e-98f0-3614700865e3-kube-api-access-5ww2q\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p2dc\" (UID: \"e7210fed-0ce4-4a6e-98f0-3614700865e3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.551937 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6d5ab9-4caf-418a-85e9-dc76c5b3c138-config\") pod \"service-ca-operator-777779d784-rvngm\" (UID: \"6a6d5ab9-4caf-418a-85e9-dc76c5b3c138\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rvngm" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.552033 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-trusted-ca\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.552108 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/798ea935-5c9b-4f13-9ed2-fa8cb0088895-metrics-tls\") pod \"dns-operator-744455d44c-m2s8b\" (UID: \"798ea935-5c9b-4f13-9ed2-fa8cb0088895\") " pod="openshift-dns-operator/dns-operator-744455d44c-m2s8b" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.552262 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6d5ab9-4caf-418a-85e9-dc76c5b3c138-serving-cert\") pod \"service-ca-operator-777779d784-rvngm\" (UID: \"6a6d5ab9-4caf-418a-85e9-dc76c5b3c138\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rvngm" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.552371 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f805118b-6de6-41c9-92c3-35acc76c5c9a-trusted-ca\") pod \"ingress-operator-5b745b69d9-4dg2g\" (UID: \"f805118b-6de6-41c9-92c3-35acc76c5c9a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.552513 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7fpm\" (UniqueName: \"kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-kube-api-access-q7fpm\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.552597 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/229d2695-3886-468e-80c7-69660da8e109-metrics-tls\") pod \"dns-default-r4rds\" (UID: \"229d2695-3886-468e-80c7-69660da8e109\") " pod="openshift-dns/dns-default-r4rds" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.552814 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78077a76-500c-439d-8a34-240d3af79fed-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gdctx\" (UID: \"78077a76-500c-439d-8a34-240d3af79fed\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.552846 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-registry-tls\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.552992 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.553011 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.553100 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/de3fc3e9-3742-497d-a3f3-73380ce16e70-cert\") pod \"ingress-canary-9nkp9\" (UID: \"de3fc3e9-3742-497d-a3f3-73380ce16e70\") " pod="openshift-ingress-canary/ingress-canary-9nkp9" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.553206 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4v8n\" (UniqueName: \"kubernetes.io/projected/6c7b70a8-7b74-4562-bc8b-bd5be42a8222-kube-api-access-f4v8n\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4p8m\" (UID: \"6c7b70a8-7b74-4562-bc8b-bd5be42a8222\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4p8m" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.553233 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fab99632-3a2a-40db-a351-7272d13aaa82-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rzzq5\" (UID: \"fab99632-3a2a-40db-a351-7272d13aaa82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.553380 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5e9adb2f-2d9c-461a-b86c-07e2325dade7-proxy-tls\") pod \"machine-config-operator-74547568cd-sbrvz\" (UID: \"5e9adb2f-2d9c-461a-b86c-07e2325dade7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.553418 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f805118b-6de6-41c9-92c3-35acc76c5c9a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4dg2g\" (UID: \"f805118b-6de6-41c9-92c3-35acc76c5c9a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.553443 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-etcd-ca\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.553627 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-config\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.553672 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5e9adb2f-2d9c-461a-b86c-07e2325dade7-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-sbrvz\" (UID: \"5e9adb2f-2d9c-461a-b86c-07e2325dade7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.553703 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z978f\" (UniqueName: \"kubernetes.io/projected/229d2695-3886-468e-80c7-69660da8e109-kube-api-access-z978f\") pod \"dns-default-r4rds\" (UID: \"229d2695-3886-468e-80c7-69660da8e109\") " pod="openshift-dns/dns-default-r4rds" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.553728 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c7b70a8-7b74-4562-bc8b-bd5be42a8222-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4p8m\" (UID: \"6c7b70a8-7b74-4562-bc8b-bd5be42a8222\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4p8m" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.556039 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.556354 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.556476 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd6pl\" (UniqueName: \"kubernetes.io/projected/fab99632-3a2a-40db-a351-7272d13aaa82-kube-api-access-sd6pl\") pod \"cluster-image-registry-operator-dc59b4c8b-rzzq5\" (UID: \"fab99632-3a2a-40db-a351-7272d13aaa82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" Dec 10 14:34:22 crc kubenswrapper[4727]: E1210 14:34:22.556843 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:23.056801959 +0000 UTC m=+167.251576701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.557385 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-etcd-client\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.560631 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.560825 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78077a76-500c-439d-8a34-240d3af79fed-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gdctx\" (UID: \"78077a76-500c-439d-8a34-240d3af79fed\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.560965 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7210fed-0ce4-4a6e-98f0-3614700865e3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p2dc\" (UID: \"e7210fed-0ce4-4a6e-98f0-3614700865e3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.561057 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-registry-certificates\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.561420 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-899c2\" (UniqueName: \"kubernetes.io/projected/6a6d5ab9-4caf-418a-85e9-dc76c5b3c138-kube-api-access-899c2\") pod \"service-ca-operator-777779d784-rvngm\" (UID: \"6a6d5ab9-4caf-418a-85e9-dc76c5b3c138\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rvngm" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.561675 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdkjn\" (UniqueName: \"kubernetes.io/projected/5e9adb2f-2d9c-461a-b86c-07e2325dade7-kube-api-access-xdkjn\") pod \"machine-config-operator-74547568cd-sbrvz\" (UID: \"5e9adb2f-2d9c-461a-b86c-07e2325dade7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.561740 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7210fed-0ce4-4a6e-98f0-3614700865e3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p2dc\" (UID: \"e7210fed-0ce4-4a6e-98f0-3614700865e3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.561766 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnm62\" (UniqueName: \"kubernetes.io/projected/798ea935-5c9b-4f13-9ed2-fa8cb0088895-kube-api-access-rnm62\") pod \"dns-operator-744455d44c-m2s8b\" (UID: \"798ea935-5c9b-4f13-9ed2-fa8cb0088895\") " pod="openshift-dns-operator/dns-operator-744455d44c-m2s8b" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.561788 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzv4m\" (UniqueName: \"kubernetes.io/projected/f805118b-6de6-41c9-92c3-35acc76c5c9a-kube-api-access-hzv4m\") pod \"ingress-operator-5b745b69d9-4dg2g\" (UID: \"f805118b-6de6-41c9-92c3-35acc76c5c9a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.563109 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-registry-certificates\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.564269 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.567874 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-registry-tls\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.579801 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.599409 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.620152 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.640285 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.658495 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.663758 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:22 crc kubenswrapper[4727]: E1210 14:34:22.664029 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:23.163986443 +0000 UTC m=+167.358760995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.664102 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4v8n\" (UniqueName: \"kubernetes.io/projected/6c7b70a8-7b74-4562-bc8b-bd5be42a8222-kube-api-access-f4v8n\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4p8m\" (UID: \"6c7b70a8-7b74-4562-bc8b-bd5be42a8222\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4p8m" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.664153 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/23f17c89-e4df-4154-962e-11e7ca7aa4f2-signing-key\") pod \"service-ca-9c57cc56f-mh2pb\" (UID: \"23f17c89-e4df-4154-962e-11e7ca7aa4f2\") " pod="openshift-service-ca/service-ca-9c57cc56f-mh2pb" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.664192 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fab99632-3a2a-40db-a351-7272d13aaa82-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rzzq5\" (UID: \"fab99632-3a2a-40db-a351-7272d13aaa82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.664217 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d780f1ac-7f3c-4598-8234-825e81e4b9d1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4lp8p\" (UID: \"d780f1ac-7f3c-4598-8234-825e81e4b9d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.664245 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mtdq\" (UniqueName: \"kubernetes.io/projected/dc615bdc-da08-4680-afa5-d500f597d18b-kube-api-access-2mtdq\") pod \"collect-profiles-29422950-2hhnf\" (UID: \"dc615bdc-da08-4680-afa5-d500f597d18b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.664266 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/73d3b8bc-b0de-43d7-b0a0-d7a298706c8c-node-bootstrap-token\") pod \"machine-config-server-hbkh7\" (UID: \"73d3b8bc-b0de-43d7-b0a0-d7a298706c8c\") " pod="openshift-machine-config-operator/machine-config-server-hbkh7" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.664286 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7vvl\" (UniqueName: \"kubernetes.io/projected/3f4cb8e1-b64a-4d17-a3b3-b1dcf4cc75b0-kube-api-access-b7vvl\") pod \"multus-admission-controller-857f4d67dd-psftx\" (UID: \"3f4cb8e1-b64a-4d17-a3b3-b1dcf4cc75b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-psftx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.664328 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5e9adb2f-2d9c-461a-b86c-07e2325dade7-proxy-tls\") pod \"machine-config-operator-74547568cd-sbrvz\" (UID: \"5e9adb2f-2d9c-461a-b86c-07e2325dade7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.664352 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f805118b-6de6-41c9-92c3-35acc76c5c9a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4dg2g\" (UID: \"f805118b-6de6-41c9-92c3-35acc76c5c9a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.664376 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-etcd-ca\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.664420 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hhl4\" (UniqueName: \"kubernetes.io/projected/73d3b8bc-b0de-43d7-b0a0-d7a298706c8c-kube-api-access-4hhl4\") pod \"machine-config-server-hbkh7\" (UID: \"73d3b8bc-b0de-43d7-b0a0-d7a298706c8c\") " pod="openshift-machine-config-operator/machine-config-server-hbkh7" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.664467 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b726080-2edc-4396-ad5a-836fb6d99418-config\") pod \"kube-apiserver-operator-766d6c64bb-n2b9d\" (UID: \"3b726080-2edc-4396-ad5a-836fb6d99418\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.664693 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a467255-5402-414b-9d42-0621d826aede-proxy-tls\") pod \"machine-config-controller-84d6567774-xd42k\" (UID: \"8a467255-5402-414b-9d42-0621d826aede\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.664796 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-config\") pod \"etcd-operator-b45778765-bs68z\" (UID: 
\"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665010 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5e9adb2f-2d9c-461a-b86c-07e2325dade7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sbrvz\" (UID: \"5e9adb2f-2d9c-461a-b86c-07e2325dade7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665049 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a467255-5402-414b-9d42-0621d826aede-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xd42k\" (UID: \"8a467255-5402-414b-9d42-0621d826aede\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665082 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8404ee16-01ed-4b08-991a-b73115123629-config\") pod \"kube-controller-manager-operator-78b949d7b-zm4xx\" (UID: \"8404ee16-01ed-4b08-991a-b73115123629\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665121 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z978f\" (UniqueName: \"kubernetes.io/projected/229d2695-3886-468e-80c7-69660da8e109-kube-api-access-z978f\") pod \"dns-default-r4rds\" (UID: \"229d2695-3886-468e-80c7-69660da8e109\") " pod="openshift-dns/dns-default-r4rds" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665160 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665182 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c7b70a8-7b74-4562-bc8b-bd5be42a8222-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4p8m\" (UID: \"6c7b70a8-7b74-4562-bc8b-bd5be42a8222\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4p8m" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665207 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd6pl\" (UniqueName: \"kubernetes.io/projected/fab99632-3a2a-40db-a351-7272d13aaa82-kube-api-access-sd6pl\") pod \"cluster-image-registry-operator-dc59b4c8b-rzzq5\" (UID: \"fab99632-3a2a-40db-a351-7272d13aaa82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665430 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/174ca2d4-702a-48fe-83d9-a9bfc1353c78-profile-collector-cert\") pod \"catalog-operator-68c6474976-5qz7j\" (UID: \"174ca2d4-702a-48fe-83d9-a9bfc1353c78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665562 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-etcd-client\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665468 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-etcd-ca\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665605 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6w2jq\" (UID: \"521cc0a4-1afa-4ef6-bdd6-37c60f87273f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665665 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e0975aba-5e6f-47df-9c61-a5b9a447dcc8-default-certificate\") pod \"router-default-5444994796-dgbsr\" (UID: \"e0975aba-5e6f-47df-9c61-a5b9a447dcc8\") " pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:22 crc kubenswrapper[4727]: E1210 14:34:22.665708 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:23.165688335 +0000 UTC m=+167.360462877 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665715 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5e9adb2f-2d9c-461a-b86c-07e2325dade7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sbrvz\" (UID: \"5e9adb2f-2d9c-461a-b86c-07e2325dade7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665743 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cb1bb43a-d94a-47fb-b22d-060818c735aa-socket-dir\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665780 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxxsl\" (UniqueName: \"kubernetes.io/projected/8a467255-5402-414b-9d42-0621d826aede-kube-api-access-wxxsl\") pod \"machine-config-controller-84d6567774-xd42k\" (UID: \"8a467255-5402-414b-9d42-0621d826aede\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665837 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b726080-2edc-4396-ad5a-836fb6d99418-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n2b9d\" (UID: \"3b726080-2edc-4396-ad5a-836fb6d99418\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665875 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8404ee16-01ed-4b08-991a-b73115123629-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zm4xx\" (UID: \"8404ee16-01ed-4b08-991a-b73115123629\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665930 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78077a76-500c-439d-8a34-240d3af79fed-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gdctx\" (UID: \"78077a76-500c-439d-8a34-240d3af79fed\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665965 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3f4cb8e1-b64a-4d17-a3b3-b1dcf4cc75b0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-psftx\" (UID: \"3f4cb8e1-b64a-4d17-a3b3-b1dcf4cc75b0\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-psftx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.665993 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-852f7\" (UniqueName: \"kubernetes.io/projected/d14ab0aa-f244-4668-911a-3a54806f024f-kube-api-access-852f7\") pod \"downloads-7954f5f757-mnprj\" (UID: \"d14ab0aa-f244-4668-911a-3a54806f024f\") " pod="openshift-console/downloads-7954f5f757-mnprj" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666078 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7210fed-0ce4-4a6e-98f0-3614700865e3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p2dc\" (UID: \"e7210fed-0ce4-4a6e-98f0-3614700865e3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666121 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cb1bb43a-d94a-47fb-b22d-060818c735aa-plugins-dir\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666158 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2t8p\" (UniqueName: \"kubernetes.io/projected/e0975aba-5e6f-47df-9c61-a5b9a447dcc8-kube-api-access-r2t8p\") pod \"router-default-5444994796-dgbsr\" (UID: \"e0975aba-5e6f-47df-9c61-a5b9a447dcc8\") " pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666211 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cf3dfb44-c616-4b66-ac57-f3e4ecef9afc-profile-collector-cert\") pod \"olm-operator-6b444d44fb-97szk\" (UID: \"cf3dfb44-c616-4b66-ac57-f3e4ecef9afc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666237 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/89063793-7aa2-4220-b653-cb480a93797f-webhook-cert\") pod \"packageserver-d55dfcdfc-brtj9\" (UID: \"89063793-7aa2-4220-b653-cb480a93797f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666276 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b837b34-03b2-4bcc-827c-fc8046263718-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-srt7t\" (UID: \"5b837b34-03b2-4bcc-827c-fc8046263718\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666318 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0975aba-5e6f-47df-9c61-a5b9a447dcc8-service-ca-bundle\") pod \"router-default-5444994796-dgbsr\" (UID: \"e0975aba-5e6f-47df-9c61-a5b9a447dcc8\") " 
pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666344 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69hl4\" (UniqueName: \"kubernetes.io/projected/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-kube-api-access-69hl4\") pod \"marketplace-operator-79b997595-6w2jq\" (UID: \"521cc0a4-1afa-4ef6-bdd6-37c60f87273f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666393 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-899c2\" (UniqueName: \"kubernetes.io/projected/6a6d5ab9-4caf-418a-85e9-dc76c5b3c138-kube-api-access-899c2\") pod \"service-ca-operator-777779d784-rvngm\" (UID: \"6a6d5ab9-4caf-418a-85e9-dc76c5b3c138\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rvngm" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666428 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdkjn\" (UniqueName: \"kubernetes.io/projected/5e9adb2f-2d9c-461a-b86c-07e2325dade7-kube-api-access-xdkjn\") pod \"machine-config-operator-74547568cd-sbrvz\" (UID: \"5e9adb2f-2d9c-461a-b86c-07e2325dade7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666466 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7210fed-0ce4-4a6e-98f0-3614700865e3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p2dc\" (UID: \"e7210fed-0ce4-4a6e-98f0-3614700865e3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666519 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnm62\" (UniqueName: \"kubernetes.io/projected/798ea935-5c9b-4f13-9ed2-fa8cb0088895-kube-api-access-rnm62\") pod \"dns-operator-744455d44c-m2s8b\" (UID: \"798ea935-5c9b-4f13-9ed2-fa8cb0088895\") " pod="openshift-dns-operator/dns-operator-744455d44c-m2s8b" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666546 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzv4m\" (UniqueName: \"kubernetes.io/projected/f805118b-6de6-41c9-92c3-35acc76c5c9a-kube-api-access-hzv4m\") pod \"ingress-operator-5b745b69d9-4dg2g\" (UID: \"f805118b-6de6-41c9-92c3-35acc76c5c9a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666641 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ksnn\" (UniqueName: \"kubernetes.io/projected/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-kube-api-access-7ksnn\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666669 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/174ca2d4-702a-48fe-83d9-a9bfc1353c78-srv-cert\") pod \"catalog-operator-68c6474976-5qz7j\" (UID: \"174ca2d4-702a-48fe-83d9-a9bfc1353c78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" Dec 10 
14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666694 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d780f1ac-7f3c-4598-8234-825e81e4b9d1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4lp8p\" (UID: \"d780f1ac-7f3c-4598-8234-825e81e4b9d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666724 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cb1bb43a-d94a-47fb-b22d-060818c735aa-registration-dir\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666750 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lqtc\" (UniqueName: \"kubernetes.io/projected/cb1bb43a-d94a-47fb-b22d-060818c735aa-kube-api-access-7lqtc\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666774 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdr5w\" (UniqueName: \"kubernetes.io/projected/23f17c89-e4df-4154-962e-11e7ca7aa4f2-kube-api-access-hdr5w\") pod \"service-ca-9c57cc56f-mh2pb\" (UID: \"23f17c89-e4df-4154-962e-11e7ca7aa4f2\") " pod="openshift-service-ca/service-ca-9c57cc56f-mh2pb" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666832 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fab99632-3a2a-40db-a351-7272d13aaa82-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rzzq5\" (UID: \"fab99632-3a2a-40db-a351-7272d13aaa82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666856 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/229d2695-3886-468e-80c7-69660da8e109-config-volume\") pod \"dns-default-r4rds\" (UID: \"229d2695-3886-468e-80c7-69660da8e109\") " pod="openshift-dns/dns-default-r4rds" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666880 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cf3dfb44-c616-4b66-ac57-f3e4ecef9afc-srv-cert\") pod \"olm-operator-6b444d44fb-97szk\" (UID: \"cf3dfb44-c616-4b66-ac57-f3e4ecef9afc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666925 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8404ee16-01ed-4b08-991a-b73115123629-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zm4xx\" (UID: \"8404ee16-01ed-4b08-991a-b73115123629\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666966 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5e9adb2f-2d9c-461a-b86c-07e2325dade7-images\") pod \"machine-config-operator-74547568cd-sbrvz\" (UID: \"5e9adb2f-2d9c-461a-b86c-07e2325dade7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666988 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d780f1ac-7f3c-4598-8234-825e81e4b9d1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4lp8p\" (UID: \"d780f1ac-7f3c-4598-8234-825e81e4b9d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.667015 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/89063793-7aa2-4220-b653-cb480a93797f-apiservice-cert\") pod \"packageserver-d55dfcdfc-brtj9\" (UID: \"89063793-7aa2-4220-b653-cb480a93797f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.667040 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f805118b-6de6-41c9-92c3-35acc76c5c9a-metrics-tls\") pod \"ingress-operator-5b745b69d9-4dg2g\" (UID: \"f805118b-6de6-41c9-92c3-35acc76c5c9a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.667064 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cb1bb43a-d94a-47fb-b22d-060818c735aa-csi-data-dir\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.667088 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjlcx\" (UniqueName: \"kubernetes.io/projected/de3fc3e9-3742-497d-a3f3-73380ce16e70-kube-api-access-cjlcx\") pod \"ingress-canary-9nkp9\" (UID: \"de3fc3e9-3742-497d-a3f3-73380ce16e70\") " pod="openshift-ingress-canary/ingress-canary-9nkp9" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.667122 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fab99632-3a2a-40db-a351-7272d13aaa82-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rzzq5\" (UID: \"fab99632-3a2a-40db-a351-7272d13aaa82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.667148 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0975aba-5e6f-47df-9c61-a5b9a447dcc8-metrics-certs\") pod \"router-default-5444994796-dgbsr\" (UID: \"e0975aba-5e6f-47df-9c61-a5b9a447dcc8\") " pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.667191 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/23f17c89-e4df-4154-962e-11e7ca7aa4f2-signing-cabundle\") pod \"service-ca-9c57cc56f-mh2pb\" (UID: \"23f17c89-e4df-4154-962e-11e7ca7aa4f2\") " pod="openshift-service-ca/service-ca-9c57cc56f-mh2pb" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.667239 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-serving-cert\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.667263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-etcd-service-ca\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.667287 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc615bdc-da08-4680-afa5-d500f597d18b-config-volume\") pod \"collect-profiles-29422950-2hhnf\" (UID: \"dc615bdc-da08-4680-afa5-d500f597d18b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.667314 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ww2q\" (UniqueName: \"kubernetes.io/projected/e7210fed-0ce4-4a6e-98f0-3614700865e3-kube-api-access-5ww2q\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p2dc\" (UID: \"e7210fed-0ce4-4a6e-98f0-3614700865e3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.667338 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6d5ab9-4caf-418a-85e9-dc76c5b3c138-config\") pod \"service-ca-operator-777779d784-rvngm\" (UID: \"6a6d5ab9-4caf-418a-85e9-dc76c5b3c138\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rvngm" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.666720 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78077a76-500c-439d-8a34-240d3af79fed-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gdctx\" (UID: \"78077a76-500c-439d-8a34-240d3af79fed\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.668376 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5e9adb2f-2d9c-461a-b86c-07e2325dade7-images\") pod \"machine-config-operator-74547568cd-sbrvz\" (UID: \"5e9adb2f-2d9c-461a-b86c-07e2325dade7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.669183 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fab99632-3a2a-40db-a351-7272d13aaa82-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rzzq5\" (UID: 
\"fab99632-3a2a-40db-a351-7272d13aaa82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.669450 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4m46\" (UniqueName: \"kubernetes.io/projected/78077a76-500c-439d-8a34-240d3af79fed-kube-api-access-w4m46\") pod \"kube-storage-version-migrator-operator-b67b599dd-gdctx\" (UID: \"78077a76-500c-439d-8a34-240d3af79fed\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.669494 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/89063793-7aa2-4220-b653-cb480a93797f-tmpfs\") pod \"packageserver-d55dfcdfc-brtj9\" (UID: \"89063793-7aa2-4220-b653-cb480a93797f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.669530 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/798ea935-5c9b-4f13-9ed2-fa8cb0088895-metrics-tls\") pod \"dns-operator-744455d44c-m2s8b\" (UID: \"798ea935-5c9b-4f13-9ed2-fa8cb0088895\") " pod="openshift-dns-operator/dns-operator-744455d44c-m2s8b" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.669558 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6w4d\" (UniqueName: \"kubernetes.io/projected/174ca2d4-702a-48fe-83d9-a9bfc1353c78-kube-api-access-w6w4d\") pod \"catalog-operator-68c6474976-5qz7j\" (UID: \"174ca2d4-702a-48fe-83d9-a9bfc1353c78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.669675 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc615bdc-da08-4680-afa5-d500f597d18b-secret-volume\") pod \"collect-profiles-29422950-2hhnf\" (UID: \"dc615bdc-da08-4680-afa5-d500f597d18b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.669710 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cb1bb43a-d94a-47fb-b22d-060818c735aa-mountpoint-dir\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.669739 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtkqm\" (UniqueName: \"kubernetes.io/projected/89063793-7aa2-4220-b653-cb480a93797f-kube-api-access-mtkqm\") pod \"packageserver-d55dfcdfc-brtj9\" (UID: \"89063793-7aa2-4220-b653-cb480a93797f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.669772 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/73d3b8bc-b0de-43d7-b0a0-d7a298706c8c-certs\") pod \"machine-config-server-hbkh7\" (UID: \"73d3b8bc-b0de-43d7-b0a0-d7a298706c8c\") " 
pod="openshift-machine-config-operator/machine-config-server-hbkh7" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.669809 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6d5ab9-4caf-418a-85e9-dc76c5b3c138-serving-cert\") pod \"service-ca-operator-777779d784-rvngm\" (UID: \"6a6d5ab9-4caf-418a-85e9-dc76c5b3c138\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rvngm" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.669822 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5e9adb2f-2d9c-461a-b86c-07e2325dade7-proxy-tls\") pod \"machine-config-operator-74547568cd-sbrvz\" (UID: \"5e9adb2f-2d9c-461a-b86c-07e2325dade7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.669764 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-etcd-service-ca\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.669863 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e0975aba-5e6f-47df-9c61-a5b9a447dcc8-stats-auth\") pod \"router-default-5444994796-dgbsr\" (UID: \"e0975aba-5e6f-47df-9c61-a5b9a447dcc8\") " pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.670036 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7210fed-0ce4-4a6e-98f0-3614700865e3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p2dc\" (UID: \"e7210fed-0ce4-4a6e-98f0-3614700865e3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.670025 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b726080-2edc-4396-ad5a-836fb6d99418-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n2b9d\" (UID: \"3b726080-2edc-4396-ad5a-836fb6d99418\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.670206 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f805118b-6de6-41c9-92c3-35acc76c5c9a-trusted-ca\") pod \"ingress-operator-5b745b69d9-4dg2g\" (UID: \"f805118b-6de6-41c9-92c3-35acc76c5c9a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.670273 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqthf\" (UniqueName: \"kubernetes.io/projected/5b837b34-03b2-4bcc-827c-fc8046263718-kube-api-access-cqthf\") pod \"package-server-manager-789f6589d5-srt7t\" (UID: \"5b837b34-03b2-4bcc-827c-fc8046263718\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.670308 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/229d2695-3886-468e-80c7-69660da8e109-metrics-tls\") pod \"dns-default-r4rds\" (UID: \"229d2695-3886-468e-80c7-69660da8e109\") " pod="openshift-dns/dns-default-r4rds" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.670417 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78077a76-500c-439d-8a34-240d3af79fed-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gdctx\" (UID: \"78077a76-500c-439d-8a34-240d3af79fed\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.670461 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqmsq\" (UniqueName: \"kubernetes.io/projected/cf3dfb44-c616-4b66-ac57-f3e4ecef9afc-kube-api-access-wqmsq\") pod \"olm-operator-6b444d44fb-97szk\" (UID: \"cf3dfb44-c616-4b66-ac57-f3e4ecef9afc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.670494 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6w2jq\" (UID: \"521cc0a4-1afa-4ef6-bdd6-37c60f87273f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.670543 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/de3fc3e9-3742-497d-a3f3-73380ce16e70-cert\") pod \"ingress-canary-9nkp9\" (UID: \"de3fc3e9-3742-497d-a3f3-73380ce16e70\") " pod="openshift-ingress-canary/ingress-canary-9nkp9" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.670581 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4flcw\" (UniqueName: \"kubernetes.io/projected/57603b1b-8b5f-45b6-96ad-f8e4bc495165-kube-api-access-4flcw\") pod \"migrator-59844c95c7-59pjb\" (UID: \"57603b1b-8b5f-45b6-96ad-f8e4bc495165\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-59pjb" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.670603 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6d5ab9-4caf-418a-85e9-dc76c5b3c138-config\") pod \"service-ca-operator-777779d784-rvngm\" (UID: \"6a6d5ab9-4caf-418a-85e9-dc76c5b3c138\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rvngm" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.671396 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f805118b-6de6-41c9-92c3-35acc76c5c9a-trusted-ca\") pod \"ingress-operator-5b745b69d9-4dg2g\" (UID: \"f805118b-6de6-41c9-92c3-35acc76c5c9a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.672106 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6c7b70a8-7b74-4562-bc8b-bd5be42a8222-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4p8m\" (UID: \"6c7b70a8-7b74-4562-bc8b-bd5be42a8222\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4p8m" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.672533 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-config\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.673090 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-etcd-client\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.673427 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fab99632-3a2a-40db-a351-7272d13aaa82-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rzzq5\" (UID: \"fab99632-3a2a-40db-a351-7272d13aaa82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.674037 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/798ea935-5c9b-4f13-9ed2-fa8cb0088895-metrics-tls\") pod \"dns-operator-744455d44c-m2s8b\" (UID: \"798ea935-5c9b-4f13-9ed2-fa8cb0088895\") " pod="openshift-dns-operator/dns-operator-744455d44c-m2s8b" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.674746 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-serving-cert\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.675666 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7210fed-0ce4-4a6e-98f0-3614700865e3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p2dc\" (UID: \"e7210fed-0ce4-4a6e-98f0-3614700865e3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.682788 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6d5ab9-4caf-418a-85e9-dc76c5b3c138-serving-cert\") pod \"service-ca-operator-777779d784-rvngm\" (UID: \"6a6d5ab9-4caf-418a-85e9-dc76c5b3c138\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rvngm" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.685130 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78077a76-500c-439d-8a34-240d3af79fed-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gdctx\" (UID: \"78077a76-500c-439d-8a34-240d3af79fed\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.685581 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f805118b-6de6-41c9-92c3-35acc76c5c9a-metrics-tls\") pod \"ingress-operator-5b745b69d9-4dg2g\" (UID: \"f805118b-6de6-41c9-92c3-35acc76c5c9a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.694578 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpnqx\" (UniqueName: \"kubernetes.io/projected/89ccee90-bde1-4102-a1a6-08d2b5d80aac-kube-api-access-kpnqx\") pod \"apiserver-76f77b778f-cf2gn\" (UID: \"89ccee90-bde1-4102-a1a6-08d2b5d80aac\") " pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.698572 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.709047 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/229d2695-3886-468e-80c7-69660da8e109-config-volume\") pod \"dns-default-r4rds\" (UID: \"229d2695-3886-468e-80c7-69660da8e109\") " pod="openshift-dns/dns-default-r4rds" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.714050 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.720741 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.739092 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.743289 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/229d2695-3886-468e-80c7-69660da8e109-metrics-tls\") pod \"dns-default-r4rds\" (UID: \"229d2695-3886-468e-80c7-69660da8e109\") " pod="openshift-dns/dns-default-r4rds" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.771754 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:22 crc kubenswrapper[4727]: E1210 14:34:22.772000 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:23.271953576 +0000 UTC m=+167.466728128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.772128 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4flcw\" (UniqueName: \"kubernetes.io/projected/57603b1b-8b5f-45b6-96ad-f8e4bc495165-kube-api-access-4flcw\") pod \"migrator-59844c95c7-59pjb\" (UID: \"57603b1b-8b5f-45b6-96ad-f8e4bc495165\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-59pjb" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.772182 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/23f17c89-e4df-4154-962e-11e7ca7aa4f2-signing-key\") pod \"service-ca-9c57cc56f-mh2pb\" (UID: \"23f17c89-e4df-4154-962e-11e7ca7aa4f2\") " pod="openshift-service-ca/service-ca-9c57cc56f-mh2pb" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.772225 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d780f1ac-7f3c-4598-8234-825e81e4b9d1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4lp8p\" (UID: \"d780f1ac-7f3c-4598-8234-825e81e4b9d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.772253 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mtdq\" (UniqueName: \"kubernetes.io/projected/dc615bdc-da08-4680-afa5-d500f597d18b-kube-api-access-2mtdq\") pod \"collect-profiles-29422950-2hhnf\" (UID: \"dc615bdc-da08-4680-afa5-d500f597d18b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.772278 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/73d3b8bc-b0de-43d7-b0a0-d7a298706c8c-node-bootstrap-token\") pod \"machine-config-server-hbkh7\" (UID: \"73d3b8bc-b0de-43d7-b0a0-d7a298706c8c\") " pod="openshift-machine-config-operator/machine-config-server-hbkh7" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.772304 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7vvl\" (UniqueName: \"kubernetes.io/projected/3f4cb8e1-b64a-4d17-a3b3-b1dcf4cc75b0-kube-api-access-b7vvl\") pod \"multus-admission-controller-857f4d67dd-psftx\" (UID: \"3f4cb8e1-b64a-4d17-a3b3-b1dcf4cc75b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-psftx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.772603 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hhl4\" (UniqueName: \"kubernetes.io/projected/73d3b8bc-b0de-43d7-b0a0-d7a298706c8c-kube-api-access-4hhl4\") pod \"machine-config-server-hbkh7\" (UID: \"73d3b8bc-b0de-43d7-b0a0-d7a298706c8c\") " pod="openshift-machine-config-operator/machine-config-server-hbkh7" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.773447 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b726080-2edc-4396-ad5a-836fb6d99418-config\") pod \"kube-apiserver-operator-766d6c64bb-n2b9d\" (UID: \"3b726080-2edc-4396-ad5a-836fb6d99418\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.774267 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a467255-5402-414b-9d42-0621d826aede-proxy-tls\") pod \"machine-config-controller-84d6567774-xd42k\" (UID: \"8a467255-5402-414b-9d42-0621d826aede\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.774297 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8404ee16-01ed-4b08-991a-b73115123629-config\") pod \"kube-controller-manager-operator-78b949d7b-zm4xx\" (UID: \"8404ee16-01ed-4b08-991a-b73115123629\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.774158 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b726080-2edc-4396-ad5a-836fb6d99418-config\") pod \"kube-apiserver-operator-766d6c64bb-n2b9d\" (UID: \"3b726080-2edc-4396-ad5a-836fb6d99418\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.774638 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a467255-5402-414b-9d42-0621d826aede-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xd42k\" (UID: \"8a467255-5402-414b-9d42-0621d826aede\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.774696 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.774724 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/174ca2d4-702a-48fe-83d9-a9bfc1353c78-profile-collector-cert\") pod \"catalog-operator-68c6474976-5qz7j\" (UID: \"174ca2d4-702a-48fe-83d9-a9bfc1353c78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.774743 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6w2jq\" (UID: \"521cc0a4-1afa-4ef6-bdd6-37c60f87273f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.774761 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/e0975aba-5e6f-47df-9c61-a5b9a447dcc8-default-certificate\") pod \"router-default-5444994796-dgbsr\" (UID: \"e0975aba-5e6f-47df-9c61-a5b9a447dcc8\") " pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.774792 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cb1bb43a-d94a-47fb-b22d-060818c735aa-socket-dir\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.774817 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxxsl\" (UniqueName: \"kubernetes.io/projected/8a467255-5402-414b-9d42-0621d826aede-kube-api-access-wxxsl\") pod \"machine-config-controller-84d6567774-xd42k\" (UID: \"8a467255-5402-414b-9d42-0621d826aede\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.774857 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b726080-2edc-4396-ad5a-836fb6d99418-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n2b9d\" (UID: \"3b726080-2edc-4396-ad5a-836fb6d99418\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.774928 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8404ee16-01ed-4b08-991a-b73115123629-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zm4xx\" (UID: \"8404ee16-01ed-4b08-991a-b73115123629\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.774957 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3f4cb8e1-b64a-4d17-a3b3-b1dcf4cc75b0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-psftx\" (UID: \"3f4cb8e1-b64a-4d17-a3b3-b1dcf4cc75b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-psftx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.774983 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-852f7\" (UniqueName: \"kubernetes.io/projected/d14ab0aa-f244-4668-911a-3a54806f024f-kube-api-access-852f7\") pod \"downloads-7954f5f757-mnprj\" (UID: \"d14ab0aa-f244-4668-911a-3a54806f024f\") " pod="openshift-console/downloads-7954f5f757-mnprj" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.775040 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cb1bb43a-d94a-47fb-b22d-060818c735aa-plugins-dir\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.775067 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2t8p\" (UniqueName: \"kubernetes.io/projected/e0975aba-5e6f-47df-9c61-a5b9a447dcc8-kube-api-access-r2t8p\") pod \"router-default-5444994796-dgbsr\" (UID: 
\"e0975aba-5e6f-47df-9c61-a5b9a447dcc8\") " pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.775108 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cf3dfb44-c616-4b66-ac57-f3e4ecef9afc-profile-collector-cert\") pod \"olm-operator-6b444d44fb-97szk\" (UID: \"cf3dfb44-c616-4b66-ac57-f3e4ecef9afc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.775141 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/89063793-7aa2-4220-b653-cb480a93797f-webhook-cert\") pod \"packageserver-d55dfcdfc-brtj9\" (UID: \"89063793-7aa2-4220-b653-cb480a93797f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.775187 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0975aba-5e6f-47df-9c61-a5b9a447dcc8-service-ca-bundle\") pod \"router-default-5444994796-dgbsr\" (UID: \"e0975aba-5e6f-47df-9c61-a5b9a447dcc8\") " pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.775200 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cb1bb43a-d94a-47fb-b22d-060818c735aa-socket-dir\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.775217 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8404ee16-01ed-4b08-991a-b73115123629-config\") pod \"kube-controller-manager-operator-78b949d7b-zm4xx\" (UID: \"8404ee16-01ed-4b08-991a-b73115123629\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.775312 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a467255-5402-414b-9d42-0621d826aede-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xd42k\" (UID: \"8a467255-5402-414b-9d42-0621d826aede\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.775213 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69hl4\" (UniqueName: \"kubernetes.io/projected/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-kube-api-access-69hl4\") pod \"marketplace-operator-79b997595-6w2jq\" (UID: \"521cc0a4-1afa-4ef6-bdd6-37c60f87273f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.775389 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b837b34-03b2-4bcc-827c-fc8046263718-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-srt7t\" (UID: \"5b837b34-03b2-4bcc-827c-fc8046263718\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t" Dec 10 14:34:22 crc 
kubenswrapper[4727]: I1210 14:34:22.775391 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cb1bb43a-d94a-47fb-b22d-060818c735aa-plugins-dir\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: E1210 14:34:22.775460 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:23.275437982 +0000 UTC m=+167.470212524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.775854 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d780f1ac-7f3c-4598-8234-825e81e4b9d1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4lp8p\" (UID: \"d780f1ac-7f3c-4598-8234-825e81e4b9d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.775932 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/174ca2d4-702a-48fe-83d9-a9bfc1353c78-srv-cert\") pod \"catalog-operator-68c6474976-5qz7j\" (UID: \"174ca2d4-702a-48fe-83d9-a9bfc1353c78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.775961 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cb1bb43a-d94a-47fb-b22d-060818c735aa-registration-dir\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.775990 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lqtc\" (UniqueName: \"kubernetes.io/projected/cb1bb43a-d94a-47fb-b22d-060818c735aa-kube-api-access-7lqtc\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776020 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cf3dfb44-c616-4b66-ac57-f3e4ecef9afc-srv-cert\") pod \"olm-operator-6b444d44fb-97szk\" (UID: \"cf3dfb44-c616-4b66-ac57-f3e4ecef9afc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776045 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8404ee16-01ed-4b08-991a-b73115123629-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-zm4xx\" (UID: \"8404ee16-01ed-4b08-991a-b73115123629\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776075 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdr5w\" (UniqueName: \"kubernetes.io/projected/23f17c89-e4df-4154-962e-11e7ca7aa4f2-kube-api-access-hdr5w\") pod \"service-ca-9c57cc56f-mh2pb\" (UID: \"23f17c89-e4df-4154-962e-11e7ca7aa4f2\") " pod="openshift-service-ca/service-ca-9c57cc56f-mh2pb" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776119 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d780f1ac-7f3c-4598-8234-825e81e4b9d1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4lp8p\" (UID: \"d780f1ac-7f3c-4598-8234-825e81e4b9d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776144 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/89063793-7aa2-4220-b653-cb480a93797f-apiservice-cert\") pod \"packageserver-d55dfcdfc-brtj9\" (UID: \"89063793-7aa2-4220-b653-cb480a93797f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776178 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cb1bb43a-d94a-47fb-b22d-060818c735aa-csi-data-dir\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776211 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0975aba-5e6f-47df-9c61-a5b9a447dcc8-metrics-certs\") pod \"router-default-5444994796-dgbsr\" (UID: \"e0975aba-5e6f-47df-9c61-a5b9a447dcc8\") " pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776249 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/23f17c89-e4df-4154-962e-11e7ca7aa4f2-signing-cabundle\") pod \"service-ca-9c57cc56f-mh2pb\" (UID: \"23f17c89-e4df-4154-962e-11e7ca7aa4f2\") " pod="openshift-service-ca/service-ca-9c57cc56f-mh2pb" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776275 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0975aba-5e6f-47df-9c61-a5b9a447dcc8-service-ca-bundle\") pod \"router-default-5444994796-dgbsr\" (UID: \"e0975aba-5e6f-47df-9c61-a5b9a447dcc8\") " pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776300 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc615bdc-da08-4680-afa5-d500f597d18b-config-volume\") pod \"collect-profiles-29422950-2hhnf\" (UID: \"dc615bdc-da08-4680-afa5-d500f597d18b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776343 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/89063793-7aa2-4220-b653-cb480a93797f-tmpfs\") pod \"packageserver-d55dfcdfc-brtj9\" (UID: \"89063793-7aa2-4220-b653-cb480a93797f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776400 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6w4d\" (UniqueName: \"kubernetes.io/projected/174ca2d4-702a-48fe-83d9-a9bfc1353c78-kube-api-access-w6w4d\") pod \"catalog-operator-68c6474976-5qz7j\" (UID: \"174ca2d4-702a-48fe-83d9-a9bfc1353c78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776435 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc615bdc-da08-4680-afa5-d500f597d18b-secret-volume\") pod \"collect-profiles-29422950-2hhnf\" (UID: \"dc615bdc-da08-4680-afa5-d500f597d18b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776457 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cb1bb43a-d94a-47fb-b22d-060818c735aa-mountpoint-dir\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776481 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtkqm\" (UniqueName: \"kubernetes.io/projected/89063793-7aa2-4220-b653-cb480a93797f-kube-api-access-mtkqm\") pod \"packageserver-d55dfcdfc-brtj9\" (UID: \"89063793-7aa2-4220-b653-cb480a93797f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776511 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/73d3b8bc-b0de-43d7-b0a0-d7a298706c8c-certs\") pod \"machine-config-server-hbkh7\" (UID: \"73d3b8bc-b0de-43d7-b0a0-d7a298706c8c\") " pod="openshift-machine-config-operator/machine-config-server-hbkh7" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776541 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e0975aba-5e6f-47df-9c61-a5b9a447dcc8-stats-auth\") pod \"router-default-5444994796-dgbsr\" (UID: \"e0975aba-5e6f-47df-9c61-a5b9a447dcc8\") " pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776571 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b726080-2edc-4396-ad5a-836fb6d99418-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n2b9d\" (UID: \"3b726080-2edc-4396-ad5a-836fb6d99418\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776599 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqthf\" (UniqueName: \"kubernetes.io/projected/5b837b34-03b2-4bcc-827c-fc8046263718-kube-api-access-cqthf\") pod \"package-server-manager-789f6589d5-srt7t\" (UID: 
\"5b837b34-03b2-4bcc-827c-fc8046263718\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776655 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqmsq\" (UniqueName: \"kubernetes.io/projected/cf3dfb44-c616-4b66-ac57-f3e4ecef9afc-kube-api-access-wqmsq\") pod \"olm-operator-6b444d44fb-97szk\" (UID: \"cf3dfb44-c616-4b66-ac57-f3e4ecef9afc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776681 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6w2jq\" (UID: \"521cc0a4-1afa-4ef6-bdd6-37c60f87273f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.776681 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6w2jq\" (UID: \"521cc0a4-1afa-4ef6-bdd6-37c60f87273f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.777510 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/23f17c89-e4df-4154-962e-11e7ca7aa4f2-signing-key\") pod \"service-ca-9c57cc56f-mh2pb\" (UID: \"23f17c89-e4df-4154-962e-11e7ca7aa4f2\") " pod="openshift-service-ca/service-ca-9c57cc56f-mh2pb" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.777811 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cb1bb43a-d94a-47fb-b22d-060818c735aa-registration-dir\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.777886 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6bx2\" (UniqueName: \"kubernetes.io/projected/9d6ceeb8-a826-4a2b-99d1-19d071983122-kube-api-access-l6bx2\") pod \"console-operator-58897d9998-455f6\" (UID: \"9d6ceeb8-a826-4a2b-99d1-19d071983122\") " pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.778626 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cb1bb43a-d94a-47fb-b22d-060818c735aa-csi-data-dir\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.779750 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d780f1ac-7f3c-4598-8234-825e81e4b9d1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4lp8p\" (UID: \"d780f1ac-7f3c-4598-8234-825e81e4b9d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.780103 4727 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cb1bb43a-d94a-47fb-b22d-060818c735aa-mountpoint-dir\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.780339 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8404ee16-01ed-4b08-991a-b73115123629-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zm4xx\" (UID: \"8404ee16-01ed-4b08-991a-b73115123629\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.780417 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6w2jq\" (UID: \"521cc0a4-1afa-4ef6-bdd6-37c60f87273f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.780986 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc615bdc-da08-4680-afa5-d500f597d18b-config-volume\") pod \"collect-profiles-29422950-2hhnf\" (UID: \"dc615bdc-da08-4680-afa5-d500f597d18b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.781028 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/89063793-7aa2-4220-b653-cb480a93797f-tmpfs\") pod \"packageserver-d55dfcdfc-brtj9\" (UID: \"89063793-7aa2-4220-b653-cb480a93797f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.781580 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/174ca2d4-702a-48fe-83d9-a9bfc1353c78-srv-cert\") pod \"catalog-operator-68c6474976-5qz7j\" (UID: \"174ca2d4-702a-48fe-83d9-a9bfc1353c78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.781742 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/23f17c89-e4df-4154-962e-11e7ca7aa4f2-signing-cabundle\") pod \"service-ca-9c57cc56f-mh2pb\" (UID: \"23f17c89-e4df-4154-962e-11e7ca7aa4f2\") " pod="openshift-service-ca/service-ca-9c57cc56f-mh2pb" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.782061 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cf3dfb44-c616-4b66-ac57-f3e4ecef9afc-profile-collector-cert\") pod \"olm-operator-6b444d44fb-97szk\" (UID: \"cf3dfb44-c616-4b66-ac57-f3e4ecef9afc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.783630 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/89063793-7aa2-4220-b653-cb480a93797f-apiservice-cert\") pod \"packageserver-d55dfcdfc-brtj9\" (UID: \"89063793-7aa2-4220-b653-cb480a93797f\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.783870 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/174ca2d4-702a-48fe-83d9-a9bfc1353c78-profile-collector-cert\") pod \"catalog-operator-68c6474976-5qz7j\" (UID: \"174ca2d4-702a-48fe-83d9-a9bfc1353c78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.783896 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3f4cb8e1-b64a-4d17-a3b3-b1dcf4cc75b0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-psftx\" (UID: \"3f4cb8e1-b64a-4d17-a3b3-b1dcf4cc75b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-psftx" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.784175 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0975aba-5e6f-47df-9c61-a5b9a447dcc8-metrics-certs\") pod \"router-default-5444994796-dgbsr\" (UID: \"e0975aba-5e6f-47df-9c61-a5b9a447dcc8\") " pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.784591 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d780f1ac-7f3c-4598-8234-825e81e4b9d1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4lp8p\" (UID: \"d780f1ac-7f3c-4598-8234-825e81e4b9d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.784701 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b726080-2edc-4396-ad5a-836fb6d99418-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n2b9d\" (UID: \"3b726080-2edc-4396-ad5a-836fb6d99418\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.784722 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/89063793-7aa2-4220-b653-cb480a93797f-webhook-cert\") pod \"packageserver-d55dfcdfc-brtj9\" (UID: \"89063793-7aa2-4220-b653-cb480a93797f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.784730 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cf3dfb44-c616-4b66-ac57-f3e4ecef9afc-srv-cert\") pod \"olm-operator-6b444d44fb-97szk\" (UID: \"cf3dfb44-c616-4b66-ac57-f3e4ecef9afc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.784841 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc615bdc-da08-4680-afa5-d500f597d18b-secret-volume\") pod \"collect-profiles-29422950-2hhnf\" (UID: \"dc615bdc-da08-4680-afa5-d500f597d18b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.784941 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b837b34-03b2-4bcc-827c-fc8046263718-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-srt7t\" (UID: \"5b837b34-03b2-4bcc-827c-fc8046263718\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.785724 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e0975aba-5e6f-47df-9c61-a5b9a447dcc8-default-certificate\") pod \"router-default-5444994796-dgbsr\" (UID: \"e0975aba-5e6f-47df-9c61-a5b9a447dcc8\") " pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.786519 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a467255-5402-414b-9d42-0621d826aede-proxy-tls\") pod \"machine-config-controller-84d6567774-xd42k\" (UID: \"8a467255-5402-414b-9d42-0621d826aede\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.790367 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e0975aba-5e6f-47df-9c61-a5b9a447dcc8-stats-auth\") pod \"router-default-5444994796-dgbsr\" (UID: \"e0975aba-5e6f-47df-9c61-a5b9a447dcc8\") " pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.796234 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k27rf\" (UniqueName: \"kubernetes.io/projected/aa4939cc-34b3-4562-9798-92d443fb76ca-kube-api-access-k27rf\") pod \"oauth-openshift-558db77b4-nv97s\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.815129 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" event={"ID":"7bda144f-7fce-4ff5-9b8d-1363a5d14fc8","Type":"ContainerStarted","Data":"147a867973428c57209dd85f4a9c6847850889569fdfff4dc84ff87c3d8503da"} Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.816567 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlkh6\" (UniqueName: \"kubernetes.io/projected/e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d-kube-api-access-rlkh6\") pod \"apiserver-7bbb656c7d-f8rtv\" (UID: \"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.835636 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgrsp\" (UniqueName: \"kubernetes.io/projected/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-kube-api-access-qgrsp\") pod \"route-controller-manager-6576b87f9c-q8jzk\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.877290 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Dec 10 14:34:22 crc kubenswrapper[4727]: E1210 14:34:22.879088 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:23.379058467 +0000 UTC m=+167.573833019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.888474 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmjl2\" (UniqueName: \"kubernetes.io/projected/ec2ea8fb-1885-4b49-8bd2-ee4a63586ade-kube-api-access-lmjl2\") pod \"machine-api-operator-5694c8668f-sjrxq\" (UID: \"ec2ea8fb-1885-4b49-8bd2-ee4a63586ade\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.888557 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkzh5\" (UniqueName: \"kubernetes.io/projected/7ff0e752-75eb-4639-a821-ccbaf0e2da51-kube-api-access-kkzh5\") pod \"machine-approver-56656f9798-8ppj4\" (UID: \"7ff0e752-75eb-4639-a821-ccbaf0e2da51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.894569 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhv9c\" (UniqueName: \"kubernetes.io/projected/f90473e8-86a2-4a1c-aadf-31d286ed0f21-kube-api-access-jhv9c\") pod \"controller-manager-879f6c89f-tzznn\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.917883 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp2dq\" (UniqueName: \"kubernetes.io/projected/198563d9-9967-47b7-aa02-c2b5be2d7c4b-kube-api-access-gp2dq\") pod \"cluster-samples-operator-665b6dd947-nfknc\" (UID: \"198563d9-9967-47b7-aa02-c2b5be2d7c4b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.934449 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4s2z\" (UniqueName: \"kubernetes.io/projected/6d8cde10-5565-4980-a4e2-a30f26707a0e-kube-api-access-t4s2z\") pod \"console-f9d7485db-swj5s\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.936212 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.943148 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.958656 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-642jv\" (UniqueName: \"kubernetes.io/projected/67d586de-34e2-49e9-b775-2484e8efffa5-kube-api-access-642jv\") pod \"openshift-apiserver-operator-796bbdcf4f-f5t57\" (UID: \"67d586de-34e2-49e9-b775-2484e8efffa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.977445 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mp4p\" (UniqueName: \"kubernetes.io/projected/f1f25eda-d2d3-4eb9-9d05-24cc0293fa37-kube-api-access-9mp4p\") pod \"openshift-config-operator-7777fb866f-2qk8s\" (UID: \"f1f25eda-d2d3-4eb9-9d05-24cc0293fa37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.978438 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cf2gn"] Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.979562 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:22 crc kubenswrapper[4727]: E1210 14:34:22.980097 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:23.480072008 +0000 UTC m=+167.674846710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.987761 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:22 crc kubenswrapper[4727]: I1210 14:34:22.996573 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.023605 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.025044 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.031058 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/73d3b8bc-b0de-43d7-b0a0-d7a298706c8c-node-bootstrap-token\") pod \"machine-config-server-hbkh7\" (UID: \"73d3b8bc-b0de-43d7-b0a0-d7a298706c8c\") " pod="openshift-machine-config-operator/machine-config-server-hbkh7" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.041984 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.056520 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.056554 4727 request.go:700] Waited for 1.911871515s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0 Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.056962 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/73d3b8bc-b0de-43d7-b0a0-d7a298706c8c-certs\") pod \"machine-config-server-hbkh7\" (UID: \"73d3b8bc-b0de-43d7-b0a0-d7a298706c8c\") " pod="openshift-machine-config-operator/machine-config-server-hbkh7" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.060103 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.066409 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/de3fc3e9-3742-497d-a3f3-73380ce16e70-cert\") pod \"ingress-canary-9nkp9\" (UID: \"de3fc3e9-3742-497d-a3f3-73380ce16e70\") " pod="openshift-ingress-canary/ingress-canary-9nkp9" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.067442 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.079018 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.086947 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:23 crc kubenswrapper[4727]: E1210 14:34:23.087594 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:23.587569729 +0000 UTC m=+167.782344281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.087751 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.106058 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.116975 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.118714 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.126308 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.143526 4727 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.159737 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.165460 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc" Dec 10 14:34:23 crc kubenswrapper[4727]: W1210 14:34:23.171456 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff0e752_75eb_4639_a821_ccbaf0e2da51.slice/crio-b228739031c88eaae8f7045eeaaa57edd6de8b5a9be8bde1523ad71d2b5bc4d7 WatchSource:0}: Error finding container b228739031c88eaae8f7045eeaaa57edd6de8b5a9be8bde1523ad71d2b5bc4d7: Status 404 returned error can't find the container with id b228739031c88eaae8f7045eeaaa57edd6de8b5a9be8bde1523ad71d2b5bc4d7 Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.174869 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.179431 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.191872 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:23 crc kubenswrapper[4727]: E1210 14:34:23.192385 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:23.692363074 +0000 UTC m=+167.887137626 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.222192 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-bound-sa-token\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.252892 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7fpm\" (UniqueName: \"kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-kube-api-access-q7fpm\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.257970 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4v8n\" (UniqueName: \"kubernetes.io/projected/6c7b70a8-7b74-4562-bc8b-bd5be42a8222-kube-api-access-f4v8n\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4p8m\" (UID: \"6c7b70a8-7b74-4562-bc8b-bd5be42a8222\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4p8m" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.284464 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fab99632-3a2a-40db-a351-7272d13aaa82-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rzzq5\" (UID: \"fab99632-3a2a-40db-a351-7272d13aaa82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.295345 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:23 crc kubenswrapper[4727]: E1210 14:34:23.295559 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:23.795516188 +0000 UTC m=+167.990290890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.295759 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:23 crc kubenswrapper[4727]: E1210 14:34:23.296461 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:23.796451191 +0000 UTC m=+167.991225733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.306487 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f805118b-6de6-41c9-92c3-35acc76c5c9a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4dg2g\" (UID: \"f805118b-6de6-41c9-92c3-35acc76c5c9a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.324589 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z978f\" (UniqueName: \"kubernetes.io/projected/229d2695-3886-468e-80c7-69660da8e109-kube-api-access-z978f\") pod \"dns-default-r4rds\" (UID: \"229d2695-3886-468e-80c7-69660da8e109\") " pod="openshift-dns/dns-default-r4rds" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.344247 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd6pl\" (UniqueName: \"kubernetes.io/projected/fab99632-3a2a-40db-a351-7272d13aaa82-kube-api-access-sd6pl\") pod \"cluster-image-registry-operator-dc59b4c8b-rzzq5\" (UID: \"fab99632-3a2a-40db-a351-7272d13aaa82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.361864 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdkjn\" (UniqueName: \"kubernetes.io/projected/5e9adb2f-2d9c-461a-b86c-07e2325dade7-kube-api-access-xdkjn\") pod \"machine-config-operator-74547568cd-sbrvz\" (UID: \"5e9adb2f-2d9c-461a-b86c-07e2325dade7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.397836 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:23 crc kubenswrapper[4727]: E1210 14:34:23.398527 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:23.898501838 +0000 UTC m=+168.093276390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.399354 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzv4m\" (UniqueName: \"kubernetes.io/projected/f805118b-6de6-41c9-92c3-35acc76c5c9a-kube-api-access-hzv4m\") pod \"ingress-operator-5b745b69d9-4dg2g\" (UID: \"f805118b-6de6-41c9-92c3-35acc76c5c9a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.407738 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnm62\" (UniqueName: \"kubernetes.io/projected/798ea935-5c9b-4f13-9ed2-fa8cb0088895-kube-api-access-rnm62\") pod \"dns-operator-744455d44c-m2s8b\" (UID: \"798ea935-5c9b-4f13-9ed2-fa8cb0088895\") " pod="openshift-dns-operator/dns-operator-744455d44c-m2s8b" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.413333 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.426458 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ksnn\" (UniqueName: \"kubernetes.io/projected/2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f-kube-api-access-7ksnn\") pod \"etcd-operator-b45778765-bs68z\" (UID: \"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.452149 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjlcx\" (UniqueName: \"kubernetes.io/projected/de3fc3e9-3742-497d-a3f3-73380ce16e70-kube-api-access-cjlcx\") pod \"ingress-canary-9nkp9\" (UID: \"de3fc3e9-3742-497d-a3f3-73380ce16e70\") " pod="openshift-ingress-canary/ingress-canary-9nkp9" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.462895 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ww2q\" (UniqueName: \"kubernetes.io/projected/e7210fed-0ce4-4a6e-98f0-3614700865e3-kube-api-access-5ww2q\") pod \"openshift-controller-manager-operator-756b6f6bc6-5p2dc\" (UID: \"e7210fed-0ce4-4a6e-98f0-3614700865e3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.472193 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tzznn"] Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.475045 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.485677 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4p8m" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.488826 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nv97s"] Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.489114 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.499508 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:23 crc kubenswrapper[4727]: E1210 14:34:23.500107 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:24.000087883 +0000 UTC m=+168.194862425 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.508756 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-899c2\" (UniqueName: \"kubernetes.io/projected/6a6d5ab9-4caf-418a-85e9-dc76c5b3c138-kube-api-access-899c2\") pod \"service-ca-operator-777779d784-rvngm\" (UID: \"6a6d5ab9-4caf-418a-85e9-dc76c5b3c138\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rvngm" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.510677 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4m46\" (UniqueName: \"kubernetes.io/projected/78077a76-500c-439d-8a34-240d3af79fed-kube-api-access-w4m46\") pod \"kube-storage-version-migrator-operator-b67b599dd-gdctx\" (UID: \"78077a76-500c-439d-8a34-240d3af79fed\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.512512 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-r4rds" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.521497 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4flcw\" (UniqueName: \"kubernetes.io/projected/57603b1b-8b5f-45b6-96ad-f8e4bc495165-kube-api-access-4flcw\") pod \"migrator-59844c95c7-59pjb\" (UID: \"57603b1b-8b5f-45b6-96ad-f8e4bc495165\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-59pjb" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.528259 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9nkp9" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.539658 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mtdq\" (UniqueName: \"kubernetes.io/projected/dc615bdc-da08-4680-afa5-d500f597d18b-kube-api-access-2mtdq\") pod \"collect-profiles-29422950-2hhnf\" (UID: \"dc615bdc-da08-4680-afa5-d500f597d18b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.556601 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.560399 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7vvl\" (UniqueName: \"kubernetes.io/projected/3f4cb8e1-b64a-4d17-a3b3-b1dcf4cc75b0-kube-api-access-b7vvl\") pod \"multus-admission-controller-857f4d67dd-psftx\" (UID: \"3f4cb8e1-b64a-4d17-a3b3-b1dcf4cc75b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-psftx" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.574493 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hhl4\" (UniqueName: \"kubernetes.io/projected/73d3b8bc-b0de-43d7-b0a0-d7a298706c8c-kube-api-access-4hhl4\") pod \"machine-config-server-hbkh7\" (UID: \"73d3b8bc-b0de-43d7-b0a0-d7a298706c8c\") " pod="openshift-machine-config-operator/machine-config-server-hbkh7" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.601172 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-m2s8b" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.601449 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:23 crc kubenswrapper[4727]: E1210 14:34:23.602312 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:24.102284743 +0000 UTC m=+168.297059285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.611340 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxxsl\" (UniqueName: \"kubernetes.io/projected/8a467255-5402-414b-9d42-0621d826aede-kube-api-access-wxxsl\") pod \"machine-config-controller-84d6567774-xd42k\" (UID: \"8a467255-5402-414b-9d42-0621d826aede\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.625349 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.629620 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8404ee16-01ed-4b08-991a-b73115123629-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zm4xx\" (UID: \"8404ee16-01ed-4b08-991a-b73115123629\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.636934 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b726080-2edc-4396-ad5a-836fb6d99418-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n2b9d\" (UID: \"3b726080-2edc-4396-ad5a-836fb6d99418\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.648927 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.653327 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69hl4\" (UniqueName: \"kubernetes.io/projected/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-kube-api-access-69hl4\") pod \"marketplace-operator-79b997595-6w2jq\" (UID: \"521cc0a4-1afa-4ef6-bdd6-37c60f87273f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.658943 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.670878 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.688745 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d780f1ac-7f3c-4598-8234-825e81e4b9d1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4lp8p\" (UID: \"d780f1ac-7f3c-4598-8234-825e81e4b9d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.689684 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-59pjb" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.695201 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk"] Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.698961 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-455f6"] Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.704117 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:23 crc kubenswrapper[4727]: E1210 14:34:23.704618 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:24.204599167 +0000 UTC m=+168.399373709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.708013 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6w4d\" (UniqueName: \"kubernetes.io/projected/174ca2d4-702a-48fe-83d9-a9bfc1353c78-kube-api-access-w6w4d\") pod \"catalog-operator-68c6474976-5qz7j\" (UID: \"174ca2d4-702a-48fe-83d9-a9bfc1353c78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.727463 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.727865 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdr5w\" (UniqueName: \"kubernetes.io/projected/23f17c89-e4df-4154-962e-11e7ca7aa4f2-kube-api-access-hdr5w\") pod \"service-ca-9c57cc56f-mh2pb\" (UID: \"23f17c89-e4df-4154-962e-11e7ca7aa4f2\") " pod="openshift-service-ca/service-ca-9c57cc56f-mh2pb" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.736051 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.744708 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqthf\" (UniqueName: \"kubernetes.io/projected/5b837b34-03b2-4bcc-827c-fc8046263718-kube-api-access-cqthf\") pod \"package-server-manager-789f6589d5-srt7t\" (UID: \"5b837b34-03b2-4bcc-827c-fc8046263718\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.752662 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rvngm" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.758583 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtkqm\" (UniqueName: \"kubernetes.io/projected/89063793-7aa2-4220-b653-cb480a93797f-kube-api-access-mtkqm\") pod \"packageserver-d55dfcdfc-brtj9\" (UID: \"89063793-7aa2-4220-b653-cb480a93797f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.758989 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-swj5s"] Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.765540 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.788128 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2t8p\" (UniqueName: \"kubernetes.io/projected/e0975aba-5e6f-47df-9c61-a5b9a447dcc8-kube-api-access-r2t8p\") pod \"router-default-5444994796-dgbsr\" (UID: \"e0975aba-5e6f-47df-9c61-a5b9a447dcc8\") " pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.800233 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-psftx" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.802685 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mh2pb" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.804958 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:23 crc kubenswrapper[4727]: E1210 14:34:23.805252 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:24.305201868 +0000 UTC m=+168.499976410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.805688 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-852f7\" (UniqueName: \"kubernetes.io/projected/d14ab0aa-f244-4668-911a-3a54806f024f-kube-api-access-852f7\") pod \"downloads-7954f5f757-mnprj\" (UID: \"d14ab0aa-f244-4668-911a-3a54806f024f\") " pod="openshift-console/downloads-7954f5f757-mnprj" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.818395 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqmsq\" (UniqueName: \"kubernetes.io/projected/cf3dfb44-c616-4b66-ac57-f3e4ecef9afc-kube-api-access-wqmsq\") pod \"olm-operator-6b444d44fb-97szk\" (UID: \"cf3dfb44-c616-4b66-ac57-f3e4ecef9afc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.823193 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hbkh7" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.825022 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz"] Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.827167 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" event={"ID":"aa4939cc-34b3-4562-9798-92d443fb76ca","Type":"ContainerStarted","Data":"ffcfb8a3d54805344b50a4828af85d8a7dcfa6468f37013d32dbf6de38f50cfd"} Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.829440 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" event={"ID":"89ccee90-bde1-4102-a1a6-08d2b5d80aac","Type":"ContainerStarted","Data":"2a6e7cd50150a2f16d8ddd29a09e7e4c2e6f947d1eea63aec40758aca8e9dc85"} Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.829495 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" event={"ID":"89ccee90-bde1-4102-a1a6-08d2b5d80aac","Type":"ContainerStarted","Data":"b6d6cee4f2012a5aef9c9c570364c3bcd703b5cb9ca457ef3df63ea79725b58d"} Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.833985 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" event={"ID":"7ff0e752-75eb-4639-a821-ccbaf0e2da51","Type":"ContainerStarted","Data":"b228739031c88eaae8f7045eeaaa57edd6de8b5a9be8bde1523ad71d2b5bc4d7"} Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.840256 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lqtc\" (UniqueName: \"kubernetes.io/projected/cb1bb43a-d94a-47fb-b22d-060818c735aa-kube-api-access-7lqtc\") pod \"csi-hostpathplugin-89q8l\" (UID: \"cb1bb43a-d94a-47fb-b22d-060818c735aa\") " pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.844453 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" event={"ID":"f90473e8-86a2-4a1c-aadf-31d286ed0f21","Type":"ContainerStarted","Data":"f2ac23ef173239bba680cdd6c8d8d693677cf05a1c0d73e21ece98a5d8a12e9c"} Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.850261 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-89q8l" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.867614 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc"] Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.885543 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57"] Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.893773 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sjrxq"] Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.907726 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:23 crc kubenswrapper[4727]: E1210 14:34:23.908227 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:24.408201968 +0000 UTC m=+168.602976710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.909348 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mnprj" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.925127 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s"] Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.925217 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g"] Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.928218 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.937418 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.965233 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv"] Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.983558 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" Dec 10 14:34:23 crc kubenswrapper[4727]: I1210 14:34:23.990667 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4p8m"] Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.007341 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.009985 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:24 crc kubenswrapper[4727]: E1210 14:34:24.010336 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:24.510307656 +0000 UTC m=+168.705082208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.019325 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t" Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.046517 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.112143 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:24 crc kubenswrapper[4727]: E1210 14:34:24.112958 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:24.612939407 +0000 UTC m=+168.807713949 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.144604 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9nkp9"] Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.213277 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:24 crc kubenswrapper[4727]: E1210 14:34:24.213527 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:24.713477686 +0000 UTC m=+168.908252228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.213730 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:24 crc kubenswrapper[4727]: E1210 14:34:24.214361 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:24.714334427 +0000 UTC m=+168.909108969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.223029 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5"] Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.286701 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-r4rds"] Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.301568 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc"] Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.314713 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:24 crc kubenswrapper[4727]: E1210 14:34:24.315018 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:24.814952049 +0000 UTC m=+169.009726591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.315191 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:24 crc kubenswrapper[4727]: E1210 14:34:24.315587 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:24.815569114 +0000 UTC m=+169.010343656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.416343 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:24 crc kubenswrapper[4727]: E1210 14:34:24.416982 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:24.916953244 +0000 UTC m=+169.111727786 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:24 crc kubenswrapper[4727]: W1210 14:34:24.439858 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod229d2695_3886_468e_80c7_69660da8e109.slice/crio-bab6e8fcede9f762649eb2f22850f2d56c3e2a885ba401593f110d016a1e50ad WatchSource:0}: Error finding container bab6e8fcede9f762649eb2f22850f2d56c3e2a885ba401593f110d016a1e50ad: Status 404 returned error can't find the container with id bab6e8fcede9f762649eb2f22850f2d56c3e2a885ba401593f110d016a1e50ad Dec 10 14:34:24 crc kubenswrapper[4727]: W1210 14:34:24.440741 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7210fed_0ce4_4a6e_98f0_3614700865e3.slice/crio-75aed83dbc4ad80c08af79705797275a261890e7c7c796970f3bff2ef000df9a WatchSource:0}: Error finding container 75aed83dbc4ad80c08af79705797275a261890e7c7c796970f3bff2ef000df9a: Status 404 returned error can't find the container with id 75aed83dbc4ad80c08af79705797275a261890e7c7c796970f3bff2ef000df9a Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.521357 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:24 crc kubenswrapper[4727]: E1210 14:34:24.522007 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-10 14:34:25.021986604 +0000 UTC m=+169.216761156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.550684 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bs68z"] Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.622948 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:24 crc kubenswrapper[4727]: E1210 14:34:24.623725 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:25.123683912 +0000 UTC m=+169.318458454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.670861 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx"] Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.692246 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-m2s8b"] Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.692311 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d"] Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.692326 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-59pjb"] Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.700021 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mh2pb"] Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.710248 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6w2jq"] Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.734396 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:24 crc kubenswrapper[4727]: E1210 14:34:24.735047 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:25.235027799 +0000 UTC m=+169.429802341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.835374 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:24 crc kubenswrapper[4727]: E1210 14:34:24.835950 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:25.335896997 +0000 UTC m=+169.530671539 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.887488 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" event={"ID":"ec2ea8fb-1885-4b49-8bd2-ee4a63586ade","Type":"ContainerStarted","Data":"e1433800deed58c32ca6e0fda83941222b0735a823dee43a4a3a9b510ff41daf"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.890496 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" event={"ID":"fab99632-3a2a-40db-a351-7272d13aaa82","Type":"ContainerStarted","Data":"88c34b00e1ef1d781724e6ecb928a1bae739bb6ff63dbadc12043fdaa516a87d"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.901662 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-swj5s" event={"ID":"6d8cde10-5565-4980-a4e2-a30f26707a0e","Type":"ContainerStarted","Data":"eb8dff6a41ba22b19d83a70becca8baa913c15d401b7100c72f10bb5779d00ef"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.904084 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" event={"ID":"f1f25eda-d2d3-4eb9-9d05-24cc0293fa37","Type":"ContainerStarted","Data":"05146f9216bff72e9a37c5296930723e7cffa4e9c23e37384e73c405512d70ae"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.906253 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-455f6" event={"ID":"9d6ceeb8-a826-4a2b-99d1-19d071983122","Type":"ContainerStarted","Data":"080a010143e1fe258be44f47a6c26fdb26c2a8c88b71b1c97420fbb982edd598"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.913280 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57" event={"ID":"67d586de-34e2-49e9-b775-2484e8efffa5","Type":"ContainerStarted","Data":"ffe56c1ecd96aa9c097147fd885cb760b56305b6e584f1cc4a8bd7314d67d423"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.915983 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" event={"ID":"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d","Type":"ContainerStarted","Data":"590c0c7847786837305bac8b74e2e7cfd37711d4601681e0231e45782a2b170d"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.917576 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc" event={"ID":"e7210fed-0ce4-4a6e-98f0-3614700865e3","Type":"ContainerStarted","Data":"75aed83dbc4ad80c08af79705797275a261890e7c7c796970f3bff2ef000df9a"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.918589 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc" event={"ID":"198563d9-9967-47b7-aa02-c2b5be2d7c4b","Type":"ContainerStarted","Data":"2cb9cf350ea34272ac7ed9a4eca474a44ca5db01541e757dea5e9c6233469292"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.919458 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" event={"ID":"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7","Type":"ContainerStarted","Data":"1637340f4d6e687faf64cb6b56deee020217ee2ff3a27ce6c08cbcf1bb1fc671"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.924621 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" event={"ID":"7ff0e752-75eb-4639-a821-ccbaf0e2da51","Type":"ContainerStarted","Data":"54b7660f3006334e6732de9272f151873688475371e3ba9ab58c73507ba58294"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.926025 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4p8m" event={"ID":"6c7b70a8-7b74-4562-bc8b-bd5be42a8222","Type":"ContainerStarted","Data":"f9098e8091008c7f2812127de23e77798af6ea07a7d269a5e0b80db434b1ecd2"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.927237 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" event={"ID":"5e9adb2f-2d9c-461a-b86c-07e2325dade7","Type":"ContainerStarted","Data":"afe2c519de76986c8bd831c65c3e565d72863731a79f0982267a08a1a7da2f95"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.933774 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" event={"ID":"aa4939cc-34b3-4562-9798-92d443fb76ca","Type":"ContainerStarted","Data":"c3b9b52e32c30652f33052438c98b4b800e09cd906e1cb1d8c9107bec99ab3d0"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.934302 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.934328 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx"] Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.941955 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" event={"ID":"f805118b-6de6-41c9-92c3-35acc76c5c9a","Type":"ContainerStarted","Data":"9d1d4243cc4600e35e86ddb6b4109ee3335828c0d62c457d9da308555e540ca0"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.944615 4727 generic.go:334] "Generic (PLEG): container finished" podID="89ccee90-bde1-4102-a1a6-08d2b5d80aac" containerID="2a6e7cd50150a2f16d8ddd29a09e7e4c2e6f947d1eea63aec40758aca8e9dc85" exitCode=0 Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.944701 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" event={"ID":"89ccee90-bde1-4102-a1a6-08d2b5d80aac","Type":"ContainerDied","Data":"2a6e7cd50150a2f16d8ddd29a09e7e4c2e6f947d1eea63aec40758aca8e9dc85"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.945274 4727 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-nv97s container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.945362 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.947202 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r4rds" event={"ID":"229d2695-3886-468e-80c7-69660da8e109","Type":"ContainerStarted","Data":"bab6e8fcede9f762649eb2f22850f2d56c3e2a885ba401593f110d016a1e50ad"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.947056 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:24 crc kubenswrapper[4727]: E1210 14:34:24.947817 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:25.447798837 +0000 UTC m=+169.642573379 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.950255 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" event={"ID":"f90473e8-86a2-4a1c-aadf-31d286ed0f21","Type":"ContainerStarted","Data":"6da90353bb68a06771f43ac5133c41e6bae7d398c9df8133a2517f16a8f5a23c"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.950821 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.954278 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9nkp9" event={"ID":"de3fc3e9-3742-497d-a3f3-73380ce16e70","Type":"ContainerStarted","Data":"2f397a81a289399fb884eefa4b81c6e3442581d556ecc34bc3f469ddf76bace6"} Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.955774 4727 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tzznn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 10 14:34:24 crc kubenswrapper[4727]: I1210 14:34:24.955855 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" podUID="f90473e8-86a2-4a1c-aadf-31d286ed0f21" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.049855 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:25 crc kubenswrapper[4727]: E1210 14:34:25.051647 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:25.551610198 +0000 UTC m=+169.746384740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:25 crc kubenswrapper[4727]: W1210 14:34:25.125712 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8404ee16_01ed_4b08_991a_b73115123629.slice/crio-ffa0f316cd3916e8f92fa4159c37e0673d14c7099c01382f46338ea92189a06e WatchSource:0}: Error finding container ffa0f316cd3916e8f92fa4159c37e0673d14c7099c01382f46338ea92189a06e: Status 404 returned error can't find the container with id ffa0f316cd3916e8f92fa4159c37e0673d14c7099c01382f46338ea92189a06e Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.151922 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:25 crc kubenswrapper[4727]: E1210 14:34:25.152977 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:25.652956237 +0000 UTC m=+169.847730779 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:25 crc kubenswrapper[4727]: W1210 14:34:25.157509 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b726080_2edc_4396_ad5a_836fb6d99418.slice/crio-ab6dcde5d2355fc66649fabd1523fb2072ad012c8ca6e2d407a959c2edc48bfe WatchSource:0}: Error finding container ab6dcde5d2355fc66649fabd1523fb2072ad012c8ca6e2d407a959c2edc48bfe: Status 404 returned error can't find the container with id ab6dcde5d2355fc66649fabd1523fb2072ad012c8ca6e2d407a959c2edc48bfe Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.161777 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-89q8l"] Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.183391 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf"] Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.192165 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-psftx"] Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.202081 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rvngm"] Dec 10 14:34:25 crc kubenswrapper[4727]: W1210 14:34:25.250669 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb1bb43a_d94a_47fb_b22d_060818c735aa.slice/crio-713382ecf80eb21b0002d4c0a96a982cc60a538c9ccaff318578883d6d85c02a WatchSource:0}: Error finding container 713382ecf80eb21b0002d4c0a96a982cc60a538c9ccaff318578883d6d85c02a: Status 404 returned error can't find the container with id 713382ecf80eb21b0002d4c0a96a982cc60a538c9ccaff318578883d6d85c02a Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.255590 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:25 crc kubenswrapper[4727]: E1210 14:34:25.255690 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:25.7556652 +0000 UTC m=+169.950439742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.255955 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:25 crc kubenswrapper[4727]: E1210 14:34:25.256399 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:25.756390168 +0000 UTC m=+169.951164710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.262768 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk"] Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.295224 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mnprj"] Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.352493 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-kljsv" podStartSLOduration=143.352464566 podStartE2EDuration="2m23.352464566s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:25.349827551 +0000 UTC m=+169.544602093" watchObservedRunningTime="2025-12-10 14:34:25.352464566 +0000 UTC m=+169.547239108" Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.358065 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:25 crc kubenswrapper[4727]: E1210 14:34:25.365497 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:25.865451358 +0000 UTC m=+170.060225900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.392838 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j"] Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.422240 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k"] Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.429839 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p"] Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.459821 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:25 crc kubenswrapper[4727]: E1210 14:34:25.460368 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:25.960348208 +0000 UTC m=+170.155122750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.525133 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" podStartSLOduration=143.525104041 podStartE2EDuration="2m23.525104041s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:25.524685281 +0000 UTC m=+169.719459823" watchObservedRunningTime="2025-12-10 14:34:25.525104041 +0000 UTC m=+169.719878583" Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.548973 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t"] Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.562452 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:25 crc kubenswrapper[4727]: E1210 14:34:25.563822 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.062857046 +0000 UTC m=+170.257631708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:25 crc kubenswrapper[4727]: W1210 14:34:25.643408 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod174ca2d4_702a_48fe_83d9_a9bfc1353c78.slice/crio-fba2e8e2ec978930f326feb2e5cc3eed9380a0343c092afe52b601126f5d8433 WatchSource:0}: Error finding container fba2e8e2ec978930f326feb2e5cc3eed9380a0343c092afe52b601126f5d8433: Status 404 returned error can't find the container with id fba2e8e2ec978930f326feb2e5cc3eed9380a0343c092afe52b601126f5d8433 Dec 10 14:34:25 crc kubenswrapper[4727]: W1210 14:34:25.648497 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a467255_5402_414b_9d42_0621d826aede.slice/crio-0e5c247e80b13da9369e9d366989ff274780ac2dc7dea3c4675283f659232092 WatchSource:0}: Error finding container 0e5c247e80b13da9369e9d366989ff274780ac2dc7dea3c4675283f659232092: Status 404 returned error can't find the container with id 0e5c247e80b13da9369e9d366989ff274780ac2dc7dea3c4675283f659232092 Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.658005 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" podStartSLOduration=143.657971851 podStartE2EDuration="2m23.657971851s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:25.627381633 +0000 UTC m=+169.822156175" watchObservedRunningTime="2025-12-10 14:34:25.657971851 +0000 UTC m=+169.852746393" Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.661296 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9"] Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.670468 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:25 crc kubenswrapper[4727]: E1210 14:34:25.671076 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.171060625 +0000 UTC m=+170.365835167 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.771713 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:25 crc kubenswrapper[4727]: E1210 14:34:25.771890 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.271866061 +0000 UTC m=+170.466640603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.772320 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:25 crc kubenswrapper[4727]: E1210 14:34:25.772754 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.272741412 +0000 UTC m=+170.467515954 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.873787 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:25 crc kubenswrapper[4727]: E1210 14:34:25.874084 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.37403718 +0000 UTC m=+170.568811742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.874659 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:25 crc kubenswrapper[4727]: E1210 14:34:25.875093 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.375076046 +0000 UTC m=+170.569850588 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.975940 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:25 crc kubenswrapper[4727]: E1210 14:34:25.976118 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.476088427 +0000 UTC m=+170.670862969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:25 crc kubenswrapper[4727]: I1210 14:34:25.976263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:25 crc kubenswrapper[4727]: E1210 14:34:25.976601 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.476592539 +0000 UTC m=+170.671367081 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.078199 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.078571 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.578469722 +0000 UTC m=+170.773244264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.078641 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.079237 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.579224171 +0000 UTC m=+170.773998893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.179922 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.180197 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.68015713 +0000 UTC m=+170.874931672 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.180282 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.180739 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.680720554 +0000 UTC m=+170.875495266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.282128 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.282245 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.782217997 +0000 UTC m=+170.976992539 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.282483 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.282869 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.782857222 +0000 UTC m=+170.977631764 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.384166 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.384380 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.884335665 +0000 UTC m=+171.079110207 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.384620 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.385034 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.885021762 +0000 UTC m=+171.079796504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.485843 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.486146 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.986097834 +0000 UTC m=+171.180872386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.486205 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.486743 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:26.986726429 +0000 UTC m=+171.181501161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.588078 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.588371 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:27.088314674 +0000 UTC m=+171.283089226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.588466 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.588536 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs\") pod \"network-metrics-daemon-wwmwn\" (UID: \"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\") " pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.589325 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:27.089298559 +0000 UTC m=+171.284073311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.689872 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.690150 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:27.190109175 +0000 UTC m=+171.384883707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.690222 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.690693 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:27.190672129 +0000 UTC m=+171.385446851 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.792461 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.792888 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:27.292813928 +0000 UTC m=+171.487588470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.793503 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.794232 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:27.294156011 +0000 UTC m=+171.488930553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.894728 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.895193 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:27.395144941 +0000 UTC m=+171.589919493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:26 crc kubenswrapper[4727]: I1210 14:34:26.895388 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:26 crc kubenswrapper[4727]: E1210 14:34:26.895954 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:27.395940301 +0000 UTC m=+171.590715033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.716032 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2bcea03d-69bd-4530-91b9-ca3ba1ffc871-metrics-certs\") pod \"network-metrics-daemon-wwmwn\" (UID: \"2bcea03d-69bd-4530-91b9-ca3ba1ffc871\") " pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.724623 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:27 crc kubenswrapper[4727]: E1210 14:34:27.724743 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:28.224707431 +0000 UTC m=+172.419481983 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.724930 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:27 crc kubenswrapper[4727]: E1210 14:34:27.725533 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:28.22550315 +0000 UTC m=+172.420277872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:27 crc kubenswrapper[4727]: E1210 14:34:27.767176 4727 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.204s" Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.767235 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx" event={"ID":"78077a76-500c-439d-8a34-240d3af79fed","Type":"ContainerStarted","Data":"4589a27ebd166edde0d29eb4f35a29626bb7aa3b4ec7138485e2c6186bc22afe"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.769479 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" event={"ID":"174ca2d4-702a-48fe-83d9-a9bfc1353c78","Type":"ContainerStarted","Data":"fba2e8e2ec978930f326feb2e5cc3eed9380a0343c092afe52b601126f5d8433"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.779523 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx" event={"ID":"8404ee16-01ed-4b08-991a-b73115123629","Type":"ContainerStarted","Data":"ffa0f316cd3916e8f92fa4159c37e0673d14c7099c01382f46338ea92189a06e"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.794630 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9nkp9" event={"ID":"de3fc3e9-3742-497d-a3f3-73380ce16e70","Type":"ContainerStarted","Data":"658df9cb92b32c26466bbd67915845dc3e4c8b11bca2933bb6249f519512dee9"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.804152 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" event={"ID":"521cc0a4-1afa-4ef6-bdd6-37c60f87273f","Type":"ContainerStarted","Data":"966cd666910bd7b6520c8e2ff66d3a121971d9a91099799f0ec8a266cfe16b80"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.813985 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d" event={"ID":"3b726080-2edc-4396-ad5a-836fb6d99418","Type":"ContainerStarted","Data":"ab6dcde5d2355fc66649fabd1523fb2072ad012c8ca6e2d407a959c2edc48bfe"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.815351 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p" event={"ID":"d780f1ac-7f3c-4598-8234-825e81e4b9d1","Type":"ContainerStarted","Data":"8a60c1b1f3e0b755c8d62af885af26526a28b388382ecd3a50f896ccf4159096"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.816688 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" event={"ID":"ec2ea8fb-1885-4b49-8bd2-ee4a63586ade","Type":"ContainerStarted","Data":"506d39d37c6c0c3d9ecd0a4c0869abfdae82ab0caeeaed827f946256a14c932b"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.818072 4727 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57" event={"ID":"67d586de-34e2-49e9-b775-2484e8efffa5","Type":"ContainerStarted","Data":"37b4289cc03d0859089126d5700c4116e9ac1343bf6ca27692a18b331180e1fa"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.825465 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:27 crc kubenswrapper[4727]: E1210 14:34:27.826484 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:28.32644565 +0000 UTC m=+172.521220192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.838418 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-psftx" event={"ID":"3f4cb8e1-b64a-4d17-a3b3-b1dcf4cc75b0","Type":"ContainerStarted","Data":"7e6f17931c3a9af8db4104fd850622eecc5610da8c39c6ff121f1f90effd59a0"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.848634 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rvngm" event={"ID":"6a6d5ab9-4caf-418a-85e9-dc76c5b3c138","Type":"ContainerStarted","Data":"176ca3f884e88e0c58bbe4046e8f0374e82740e0acef47af7b6ee38d711ae4a2"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.865705 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc" event={"ID":"e7210fed-0ce4-4a6e-98f0-3614700865e3","Type":"ContainerStarted","Data":"f0da0273149f763c90f50f0301b36654b6f177634566ea6cb53dd894621efead"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.871187 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hbkh7" event={"ID":"73d3b8bc-b0de-43d7-b0a0-d7a298706c8c","Type":"ContainerStarted","Data":"96547a9afb5eb296c87c888f01c3981ee9459f548a2193e65cf31dcafd38ea5e"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.873986 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t" event={"ID":"5b837b34-03b2-4bcc-827c-fc8046263718","Type":"ContainerStarted","Data":"2043357d2c65b8990f99a1cb3249fdc545c42c0a9f519d48e310cd003e2d2252"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.875149 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" 
event={"ID":"cf3dfb44-c616-4b66-ac57-f3e4ecef9afc","Type":"ContainerStarted","Data":"5027d7f0a3833bb133abac9b2c8533fc5df215df424a433907b4591e19492380"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.876610 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-m2s8b" event={"ID":"798ea935-5c9b-4f13-9ed2-fa8cb0088895","Type":"ContainerStarted","Data":"044987959b1bb5254ebeda1d0f98a8efacc957bdf670a833ce3b649ff232ca42"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.876898 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wwmwn" Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.878609 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" event={"ID":"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7","Type":"ContainerStarted","Data":"85afa56485e59cf4c3d0b94215f6de2978430d1ee236ab646c04d05363b69987"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.880444 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.880526 4727 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-q8jzk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.880561 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" podUID="2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.883872 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mnprj" event={"ID":"d14ab0aa-f244-4668-911a-3a54806f024f","Type":"ContainerStarted","Data":"a90e15e586c2b186076b21de8a9d7fd5542f0f6830b091245caf5e97471fdebb"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.884817 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mh2pb" event={"ID":"23f17c89-e4df-4154-962e-11e7ca7aa4f2","Type":"ContainerStarted","Data":"9a0024859db62084302cdaacc91dabdf34adf6b76c97734ef3238d238f222df1"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.890859 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" event={"ID":"f1f25eda-d2d3-4eb9-9d05-24cc0293fa37","Type":"ContainerStarted","Data":"788edb69cc479bfb53ca0c4fd1187b4796476bae10fd53a771b763645b9e777d"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.892098 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-89q8l" event={"ID":"cb1bb43a-d94a-47fb-b22d-060818c735aa","Type":"ContainerStarted","Data":"713382ecf80eb21b0002d4c0a96a982cc60a538c9ccaff318578883d6d85c02a"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.893237 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-swj5s" 
event={"ID":"6d8cde10-5565-4980-a4e2-a30f26707a0e","Type":"ContainerStarted","Data":"2aaad9c4da898cb7baea8257a84cda887202fa758199812a8e4ea405083832bc"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.893876 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-59pjb" event={"ID":"57603b1b-8b5f-45b6-96ad-f8e4bc495165","Type":"ContainerStarted","Data":"9ed8b8cc7e51e7c61d95a1b5063358803e077925fe3de338344d32ae0946dc45"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.897053 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" event={"ID":"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f","Type":"ContainerStarted","Data":"480d619b53ab0a44ec7be1ecf4896e2ffd5aa33d880654311d40f47645052250"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.898366 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k" event={"ID":"8a467255-5402-414b-9d42-0621d826aede","Type":"ContainerStarted","Data":"0e5c247e80b13da9369e9d366989ff274780ac2dc7dea3c4675283f659232092"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.900226 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" event={"ID":"5e9adb2f-2d9c-461a-b86c-07e2325dade7","Type":"ContainerStarted","Data":"3a474a1e20eb5afe97e7643309aa2c805f4d85744266a5c478435993aefbba47"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.901326 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" event={"ID":"dc615bdc-da08-4680-afa5-d500f597d18b","Type":"ContainerStarted","Data":"ee6b040cb6292a6cd84cfee48fe8e46bcf5b18969ac85a2bbcfc92d1d5c9575e"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.912350 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc" event={"ID":"198563d9-9967-47b7-aa02-c2b5be2d7c4b","Type":"ContainerStarted","Data":"12be6b07cdaa3b4b19211cc25a91aa7440584f8c635708efc8119900386dc763"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.913777 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-455f6" event={"ID":"9d6ceeb8-a826-4a2b-99d1-19d071983122","Type":"ContainerStarted","Data":"0f7127a318d3d0c7853f9c18290d34e47f7453a089a90cd63f02924926dd0a9e"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.914784 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.915149 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dgbsr" event={"ID":"e0975aba-5e6f-47df-9c61-a5b9a447dcc8","Type":"ContainerStarted","Data":"2a52d93223dde1e7a95c40cf6addaf947f30e46516693777f5a594cfd552029c"} Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.917587 4727 patch_prober.go:28] interesting pod/console-operator-58897d9998-455f6 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.917663 4727 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-455f6" podUID="9d6ceeb8-a826-4a2b-99d1-19d071983122" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.923538 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:34:27 crc kubenswrapper[4727]: I1210 14:34:27.927757 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:27 crc kubenswrapper[4727]: E1210 14:34:27.932339 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:28.432314451 +0000 UTC m=+172.627089193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.030281 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:28 crc kubenswrapper[4727]: E1210 14:34:28.030391 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:28.530370519 +0000 UTC m=+172.725145071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.030595 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:28 crc kubenswrapper[4727]: E1210 14:34:28.031025 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:28.530994254 +0000 UTC m=+172.725768956 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.099489 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-f5t57" podStartSLOduration=147.099461439 podStartE2EDuration="2m27.099461439s" podCreationTimestamp="2025-12-10 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:28.096711111 +0000 UTC m=+172.291485653" watchObservedRunningTime="2025-12-10 14:34:28.099461439 +0000 UTC m=+172.294235981" Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.136003 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:28 crc kubenswrapper[4727]: E1210 14:34:28.137256 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:28.637231104 +0000 UTC m=+172.832005646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.164618 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-455f6" podStartSLOduration=146.164588152 podStartE2EDuration="2m26.164588152s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:28.123514775 +0000 UTC m=+172.318289317" watchObservedRunningTime="2025-12-10 14:34:28.164588152 +0000 UTC m=+172.359362694" Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.192639 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" podStartSLOduration=146.192604015 podStartE2EDuration="2m26.192604015s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:28.181786228 +0000 UTC m=+172.376560780" watchObservedRunningTime="2025-12-10 14:34:28.192604015 +0000 UTC m=+172.387378557" Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.237729 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.241094 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:28 crc kubenswrapper[4727]: E1210 14:34:28.241177 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:28.741157608 +0000 UTC m=+172.935932150 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.251654 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5p2dc" podStartSLOduration=146.251617157 podStartE2EDuration="2m26.251617157s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:28.205801112 +0000 UTC m=+172.400575654" watchObservedRunningTime="2025-12-10 14:34:28.251617157 +0000 UTC m=+172.446391699" Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.342455 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:28 crc kubenswrapper[4727]: E1210 14:34:28.343486 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:28.843380269 +0000 UTC m=+173.038154811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.446127 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:28 crc kubenswrapper[4727]: E1210 14:34:28.446812 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:28.946791039 +0000 UTC m=+173.141565581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.547937 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:28 crc kubenswrapper[4727]: E1210 14:34:28.548196 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:29.048172639 +0000 UTC m=+173.242947181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.652987 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:28 crc kubenswrapper[4727]: E1210 14:34:28.653764 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:29.153747553 +0000 UTC m=+173.348522095 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.754729 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:28 crc kubenswrapper[4727]: E1210 14:34:28.755110 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:29.255087612 +0000 UTC m=+173.449862154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.755239 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:28 crc kubenswrapper[4727]: E1210 14:34:28.758525 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:29.258512627 +0000 UTC m=+173.453287169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.855898 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:28 crc kubenswrapper[4727]: E1210 14:34:28.856398 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:29.35637714 +0000 UTC m=+173.551151682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.940939 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" event={"ID":"89063793-7aa2-4220-b653-cb480a93797f","Type":"ContainerStarted","Data":"b19b3450204a28057f50597975a074f4cbe75352bab6ef5dbb7361a1d5ec7b72"} Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.942848 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r4rds" event={"ID":"229d2695-3886-468e-80c7-69660da8e109","Type":"ContainerStarted","Data":"f8e322d6e177fe9ba492c5b569f4c666ec1a9bb7b63d6bf9f107639cf4d6ba3b"} Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.947480 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" event={"ID":"f805118b-6de6-41c9-92c3-35acc76c5c9a","Type":"ContainerStarted","Data":"8e161f92ba96663b2788a073061a46f408fdbf3ccecd70c52651f564342461d0"} Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.962189 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:28 crc kubenswrapper[4727]: E1210 14:34:28.962794 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:29.462741823 +0000 UTC m=+173.657516365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.967045 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" event={"ID":"7ff0e752-75eb-4639-a821-ccbaf0e2da51","Type":"ContainerStarted","Data":"a30d957a141589c125f07cded9296ae6709d001644c068639308fda5e19d40ee"} Dec 10 14:34:28 crc kubenswrapper[4727]: I1210 14:34:28.976527 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4p8m" event={"ID":"6c7b70a8-7b74-4562-bc8b-bd5be42a8222","Type":"ContainerStarted","Data":"fcb54136618ab92644bfbbac5c7c09ff18f2929e67fc9ef38f00d270ffb0e312"} Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.002590 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" event={"ID":"fab99632-3a2a-40db-a351-7272d13aaa82","Type":"ContainerStarted","Data":"a6dddb6c12bfe36e80ff3b693b338a027cb153cb0103462986ec4357856d947d"} Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.035239 4727 generic.go:334] "Generic (PLEG): container finished" podID="f1f25eda-d2d3-4eb9-9d05-24cc0293fa37" containerID="788edb69cc479bfb53ca0c4fd1187b4796476bae10fd53a771b763645b9e777d" exitCode=0 Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.036275 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" event={"ID":"f1f25eda-d2d3-4eb9-9d05-24cc0293fa37","Type":"ContainerDied","Data":"788edb69cc479bfb53ca0c4fd1187b4796476bae10fd53a771b763645b9e777d"} Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.063543 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:29 crc kubenswrapper[4727]: E1210 14:34:29.065157 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:29.565120628 +0000 UTC m=+173.759895160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.065472 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:29 crc kubenswrapper[4727]: E1210 14:34:29.068543 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:29.568527033 +0000 UTC m=+173.763301795 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.068573 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8ppj4" podStartSLOduration=147.068536563 podStartE2EDuration="2m27.068536563s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:29.007531062 +0000 UTC m=+173.202305604" watchObservedRunningTime="2025-12-10 14:34:29.068536563 +0000 UTC m=+173.263311105" Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.116277 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzzq5" podStartSLOduration=147.116251414 podStartE2EDuration="2m27.116251414s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:29.108080472 +0000 UTC m=+173.302855014" watchObservedRunningTime="2025-12-10 14:34:29.116251414 +0000 UTC m=+173.311025956" Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.118637 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4p8m" podStartSLOduration=147.118626003 podStartE2EDuration="2m27.118626003s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:29.067391834 +0000 UTC m=+173.262166386" watchObservedRunningTime="2025-12-10 14:34:29.118626003 +0000 UTC m=+173.313400545" Dec 10 14:34:29 
crc kubenswrapper[4727]: I1210 14:34:29.138458 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" event={"ID":"dc615bdc-da08-4680-afa5-d500f597d18b","Type":"ContainerStarted","Data":"362c86ff2ae56d728e14e21cff89033742d4381bf06ebf12435a98a04e511c4b"} Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.166388 4727 generic.go:334] "Generic (PLEG): container finished" podID="e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d" containerID="f2ac853b6d4ba6fea105eb054da2a01f79cf4646dc2843c0ea92f8e7d7f241d6" exitCode=0 Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.167152 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" event={"ID":"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d","Type":"ContainerDied","Data":"f2ac853b6d4ba6fea105eb054da2a01f79cf4646dc2843c0ea92f8e7d7f241d6"} Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.173501 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:29 crc kubenswrapper[4727]: E1210 14:34:29.174273 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:29.67424322 +0000 UTC m=+173.869017762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.175685 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:29 crc kubenswrapper[4727]: E1210 14:34:29.176274 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:29.67625075 +0000 UTC m=+173.871025292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.201818 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" podStartSLOduration=147.201793572 podStartE2EDuration="2m27.201793572s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:29.171752728 +0000 UTC m=+173.366527270" watchObservedRunningTime="2025-12-10 14:34:29.201793572 +0000 UTC m=+173.396568114" Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.215343 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-455f6" Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.235823 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9nkp9" podStartSLOduration=9.235787624 podStartE2EDuration="9.235787624s" podCreationTimestamp="2025-12-10 14:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:29.232310968 +0000 UTC m=+173.427085530" watchObservedRunningTime="2025-12-10 14:34:29.235787624 +0000 UTC m=+173.430562166" Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.267952 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-swj5s" podStartSLOduration=147.26792701 podStartE2EDuration="2m27.26792701s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:29.266055703 +0000 UTC m=+173.460830245" watchObservedRunningTime="2025-12-10 14:34:29.26792701 +0000 UTC m=+173.462701552" Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.289242 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:29 crc kubenswrapper[4727]: E1210 14:34:29.289705 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:29.789661908 +0000 UTC m=+173.984436450 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.290011 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:29 crc kubenswrapper[4727]: E1210 14:34:29.290359 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:29.790352195 +0000 UTC m=+173.985126737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.391646 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:29 crc kubenswrapper[4727]: E1210 14:34:29.392086 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:29.892064433 +0000 UTC m=+174.086838975 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.494524 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:29 crc kubenswrapper[4727]: E1210 14:34:29.495154 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:29.995133555 +0000 UTC m=+174.189908107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.596376 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:29 crc kubenswrapper[4727]: E1210 14:34:29.596755 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:30.096734711 +0000 UTC m=+174.291509253 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.700508 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:29 crc kubenswrapper[4727]: E1210 14:34:29.701001 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:30.200978532 +0000 UTC m=+174.395753064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.753668 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wwmwn"] Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.803589 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:29 crc kubenswrapper[4727]: E1210 14:34:29.804026 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:30.304000602 +0000 UTC m=+174.498775144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:29 crc kubenswrapper[4727]: I1210 14:34:29.906885 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:29 crc kubenswrapper[4727]: E1210 14:34:29.907375 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:30.407351911 +0000 UTC m=+174.602126453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.007846 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:30 crc kubenswrapper[4727]: E1210 14:34:30.008395 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:30.508374973 +0000 UTC m=+174.703149515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.083948 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.111978 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:30 crc kubenswrapper[4727]: E1210 14:34:30.112415 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:30.612392347 +0000 UTC m=+174.807166889 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.218055 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:30 crc kubenswrapper[4727]: E1210 14:34:30.218848 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:30.718818292 +0000 UTC m=+174.913592834 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.279506 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" event={"ID":"521cc0a4-1afa-4ef6-bdd6-37c60f87273f","Type":"ContainerStarted","Data":"271c82a18c9ea81712e271d706eaac8c5afdc8a0019476be6ae6c9b343efe524"} Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.326218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:30 crc kubenswrapper[4727]: E1210 14:34:30.327019 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:30.827002251 +0000 UTC m=+175.021776793 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.387696 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hbkh7" event={"ID":"73d3b8bc-b0de-43d7-b0a0-d7a298706c8c","Type":"ContainerStarted","Data":"16c5f1f3b6eeca074707cac3133f3aecb633a55827bc72a6f42cc06b72e9d909"} Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.400559 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" event={"ID":"ec2ea8fb-1885-4b49-8bd2-ee4a63586ade","Type":"ContainerStarted","Data":"11e73e934469cad5aaa47311a9277476268548c2cc8e8e8304bb999d0be5e9b1"} Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.411550 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-m2s8b" event={"ID":"798ea935-5c9b-4f13-9ed2-fa8cb0088895","Type":"ContainerStarted","Data":"762e5ceed4b33a9e2a3d408f578d0414a60bef39559bc92cd7684656ad428bd8"} Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.413872 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-hbkh7" podStartSLOduration=10.413857521 podStartE2EDuration="10.413857521s" podCreationTimestamp="2025-12-10 14:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:30.413343228 +0000 UTC 
m=+174.608117780" watchObservedRunningTime="2025-12-10 14:34:30.413857521 +0000 UTC m=+174.608632063" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.427578 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx" event={"ID":"8404ee16-01ed-4b08-991a-b73115123629","Type":"ContainerStarted","Data":"d5b37e676aad369b73310e9a38bb209c6c86e9be3675049696b2c2fa49aa7c26"} Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.432174 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:30 crc kubenswrapper[4727]: E1210 14:34:30.432746 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:30.932700527 +0000 UTC m=+175.127475249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.433136 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:30 crc kubenswrapper[4727]: E1210 14:34:30.433690 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:30.933676232 +0000 UTC m=+175.128450974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.467403 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-sjrxq" podStartSLOduration=148.467364996 podStartE2EDuration="2m28.467364996s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:30.462488805 +0000 UTC m=+174.657263347" watchObservedRunningTime="2025-12-10 14:34:30.467364996 +0000 UTC m=+174.662139538" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.489704 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mnprj" event={"ID":"d14ab0aa-f244-4668-911a-3a54806f024f","Type":"ContainerStarted","Data":"38b5fb2b9d661e9cea02fb04e344ee336818a6a3d1b29f4a84a1a070d9fc1d98"} Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.498727 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mnprj" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.498792 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rvngm" event={"ID":"6a6d5ab9-4caf-418a-85e9-dc76c5b3c138","Type":"ContainerStarted","Data":"9b9dbf167dcbbd09a3bb8242135b5efc154eda0478bbf0ebfbd12cfa423c1ad2"} Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.505435 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-59pjb" event={"ID":"57603b1b-8b5f-45b6-96ad-f8e4bc495165","Type":"ContainerStarted","Data":"a5b29d975bbe8e6297a388d8341c4ca390c6385ea3d412df9e396e0386def0d9"} Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.511420 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.511634 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.533889 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:30 crc kubenswrapper[4727]: E1210 14:34:30.535930 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:31.035873032 +0000 UTC m=+175.230647754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.601742 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t" event={"ID":"5b837b34-03b2-4bcc-827c-fc8046263718","Type":"ContainerStarted","Data":"38eb61705a9f56281a69f5a3bf238a6b4af54ad7ef7f3b87b9691daa00bfa1b7"} Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.621425 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" event={"ID":"2d21defc-c060-4ab1-bedb-0b6ef2fb6b0f","Type":"ContainerStarted","Data":"8b3f99a56ea86322df6dca5bde0d9ea56a6bbd98a1a11f428237c22d7595aa02"} Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.622371 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zm4xx" podStartSLOduration=148.622345983 podStartE2EDuration="2m28.622345983s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:30.520366148 +0000 UTC m=+174.715140700" watchObservedRunningTime="2025-12-10 14:34:30.622345983 +0000 UTC m=+174.817120525" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.642524 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:30 crc kubenswrapper[4727]: E1210 14:34:30.643012 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:31.142996284 +0000 UTC m=+175.337770826 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.686126 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rvngm" podStartSLOduration=148.686099931 podStartE2EDuration="2m28.686099931s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:30.641996079 +0000 UTC m=+174.836770631" watchObservedRunningTime="2025-12-10 14:34:30.686099931 +0000 UTC m=+174.880874473" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.717229 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-bs68z" podStartSLOduration=148.717206872 podStartE2EDuration="2m28.717206872s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:30.716538805 +0000 UTC m=+174.911313357" watchObservedRunningTime="2025-12-10 14:34:30.717206872 +0000 UTC m=+174.911981414" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.722242 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-mnprj" podStartSLOduration=148.722226176 podStartE2EDuration="2m28.722226176s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:30.692300555 +0000 UTC m=+174.887075097" watchObservedRunningTime="2025-12-10 14:34:30.722226176 +0000 UTC m=+174.917000718" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.741830 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" event={"ID":"89ccee90-bde1-4102-a1a6-08d2b5d80aac","Type":"ContainerStarted","Data":"809a7e59b255926cff8b1930c7422db335c483a88d2acb7afdb661d24a976bb6"} Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.745530 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:30 crc kubenswrapper[4727]: E1210 14:34:30.747694 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:31.247638925 +0000 UTC m=+175.442413467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.818427 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx" event={"ID":"78077a76-500c-439d-8a34-240d3af79fed","Type":"ContainerStarted","Data":"85ef766cbd02677348971f04a6701f90fc54a42ec463a86e41b6d8be2adcef70"} Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.837339 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dgbsr" podStartSLOduration=148.837313595 podStartE2EDuration="2m28.837313595s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:30.836945176 +0000 UTC m=+175.031719728" watchObservedRunningTime="2025-12-10 14:34:30.837313595 +0000 UTC m=+175.032088137" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.847729 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:30 crc kubenswrapper[4727]: E1210 14:34:30.849051 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:31.349033476 +0000 UTC m=+175.543808018 (durationBeforeRetry 500ms). 
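[Annotation] Each failure above is immediately followed by a "No retries permitted until ... (durationBeforeRetry 500ms)" line from nestedpendingoperations.go:348: the kubelet gates retries of the same volume operation behind a backoff window rather than retrying in a tight loop, which is why the reconciler lines recur at roughly 100ms-500ms intervals. A rough sketch of such a gate follows, assuming the fixed 500ms window seen throughout this log; retryGate and attempt are hypothetical names, and the real implementation also deduplicates in-flight operations and can grow the backoff exponentially up to a cap.

```go
// Sketch of a per-operation retry gate, modeled on the
// "No retries permitted until ..." behavior in the log above.
package main

import (
	"fmt"
	"time"
)

type retryGate struct {
	lastErr   error
	notBefore time.Time // no retries permitted until this instant
}

// attempt runs op unless the gate is still inside its backoff window.
func (g *retryGate) attempt(op func() error) error {
	if time.Now().Before(g.notBefore) {
		return fmt.Errorf("no retries permitted until %s: %w",
			g.notBefore.Format(time.RFC3339Nano), g.lastErr)
	}
	if err := op(); err != nil {
		g.lastErr = err
		g.notBefore = time.Now().Add(500 * time.Millisecond) // durationBeforeRetry in the log
		return err
	}
	g.lastErr = nil
	return nil
}

func main() {
	g := &retryGate{}
	mountDevice := func() error {
		return fmt.Errorf("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	for i := 0; i < 3; i++ {
		fmt.Println(g.attempt(mountDevice))
		time.Sleep(200 * time.Millisecond) // attempts inside the window are rejected immediately
	}
}
```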
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.889544 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gdctx" podStartSLOduration=148.889513698 podStartE2EDuration="2m28.889513698s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:30.886540574 +0000 UTC m=+175.081315116" watchObservedRunningTime="2025-12-10 14:34:30.889513698 +0000 UTC m=+175.084288240" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.922771 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.924185 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p" podStartSLOduration=148.924163276 podStartE2EDuration="2m28.924163276s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:30.921231573 +0000 UTC m=+175.116006125" watchObservedRunningTime="2025-12-10 14:34:30.924163276 +0000 UTC m=+175.118937818" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.932324 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.934957 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dgbsr container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.935046 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dgbsr" podUID="e0975aba-5e6f-47df-9c61-a5b9a447dcc8" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.942207 4727 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-brtj9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:5443/healthz\": dial tcp 10.217.0.19:5443: connect: connection refused" start-of-body= Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.942303 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" podUID="89063793-7aa2-4220-b653-cb480a93797f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.19:5443/healthz\": dial tcp 10.217.0.19:5443: connect: 
connection refused" Dec 10 14:34:30 crc kubenswrapper[4727]: I1210 14:34:30.950488 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:30 crc kubenswrapper[4727]: E1210 14:34:30.951876 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:31.451840371 +0000 UTC m=+175.646615063 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.012173 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mh2pb" event={"ID":"23f17c89-e4df-4154-962e-11e7ca7aa4f2","Type":"ContainerStarted","Data":"c27aa135f4790ed13dd1b4c34adb8ae9a229e27afef6000a7d8db39f4709b179"} Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.030229 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" event={"ID":"2bcea03d-69bd-4530-91b9-ca3ba1ffc871","Type":"ContainerStarted","Data":"ae83e6b8f6a7790a5384cddabba2ef920d1c91c2f96ff14af5c647182c5e6ee9"} Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.033712 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-mh2pb" podStartSLOduration=149.033695968 podStartE2EDuration="2m29.033695968s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:31.031969975 +0000 UTC m=+175.226744517" watchObservedRunningTime="2025-12-10 14:34:31.033695968 +0000 UTC m=+175.228470510" Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.037214 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d" event={"ID":"3b726080-2edc-4396-ad5a-836fb6d99418","Type":"ContainerStarted","Data":"5068df525fe9b48f399db336e56f88ce48f71a23b30f57b414cf5802cd795d2c"} Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.044094 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" podStartSLOduration=149.044068524 podStartE2EDuration="2m29.044068524s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:30.983950686 +0000 UTC m=+175.178725228" watchObservedRunningTime="2025-12-10 14:34:31.044068524 +0000 UTC m=+175.238843056" Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.057620 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.058380 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n2b9d" podStartSLOduration=149.058357718 podStartE2EDuration="2m29.058357718s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:31.056458081 +0000 UTC m=+175.251232623" watchObservedRunningTime="2025-12-10 14:34:31.058357718 +0000 UTC m=+175.253132250" Dec 10 14:34:31 crc kubenswrapper[4727]: E1210 14:34:31.059279 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:31.559258861 +0000 UTC m=+175.754033403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.067565 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.079076 4727 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5qz7j container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.079148 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" podUID="174ca2d4-702a-48fe-83d9-a9bfc1353c78" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.095781 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" podStartSLOduration=149.095756514 podStartE2EDuration="2m29.095756514s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:31.093455267 +0000 UTC m=+175.288229809" watchObservedRunningTime="2025-12-10 14:34:31.095756514 +0000 UTC m=+175.290531056" Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.096920 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" event={"ID":"cf3dfb44-c616-4b66-ac57-f3e4ecef9afc","Type":"ContainerStarted","Data":"782512e286cfd4310c75db571b261b82c2f01d022e3e198a6410148d99427411"} Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.096983 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.101176 4727 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-97szk container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.101262 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" podUID="cf3dfb44-c616-4b66-ac57-f3e4ecef9afc" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.120473 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" podStartSLOduration=149.120449836 podStartE2EDuration="2m29.120449836s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:31.117951634 +0000 UTC m=+175.312726176" watchObservedRunningTime="2025-12-10 14:34:31.120449836 +0000 UTC m=+175.315224378" Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.159447 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:31 crc kubenswrapper[4727]: E1210 14:34:31.160829 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:31.660795415 +0000 UTC m=+175.855569957 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.167644 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc" podStartSLOduration=149.167623194 podStartE2EDuration="2m29.167623194s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:31.14040122 +0000 UTC m=+175.335175762" watchObservedRunningTime="2025-12-10 14:34:31.167623194 +0000 UTC m=+175.362397746" Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.245269 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" podStartSLOduration=149.245247426 podStartE2EDuration="2m29.245247426s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:31.208237219 +0000 UTC m=+175.403011761" watchObservedRunningTime="2025-12-10 14:34:31.245247426 +0000 UTC m=+175.440021968" Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.246656 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" podStartSLOduration=149.24665012 podStartE2EDuration="2m29.24665012s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:31.243651666 +0000 UTC m=+175.438426208" watchObservedRunningTime="2025-12-10 14:34:31.24665012 +0000 UTC m=+175.441424662" Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.262088 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:31 crc kubenswrapper[4727]: E1210 14:34:31.262499 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:31.762483472 +0000 UTC m=+175.957258014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.364613 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:31 crc kubenswrapper[4727]: E1210 14:34:31.365353 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:31.865328799 +0000 UTC m=+176.060103351 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.466380 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:31 crc kubenswrapper[4727]: E1210 14:34:31.467045 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:31.967025667 +0000 UTC m=+176.161800209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.567281 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:31 crc kubenswrapper[4727]: E1210 14:34:31.567660 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:32.067639438 +0000 UTC m=+176.262413980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.668612 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:31 crc kubenswrapper[4727]: E1210 14:34:31.669086 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:32.169065369 +0000 UTC m=+176.363839911 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.770576 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:31 crc kubenswrapper[4727]: E1210 14:34:31.771295 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:32.27126587 +0000 UTC m=+176.466040412 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.872817 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:31 crc kubenswrapper[4727]: E1210 14:34:31.873333 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:32.373313696 +0000 UTC m=+176.568088238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.970728 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dgbsr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:31 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Dec 10 14:34:31 crc kubenswrapper[4727]: [+]process-running ok Dec 10 14:34:31 crc kubenswrapper[4727]: healthz check failed Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.970822 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dgbsr" podUID="e0975aba-5e6f-47df-9c61-a5b9a447dcc8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:31 crc kubenswrapper[4727]: I1210 14:34:31.974287 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:31 crc kubenswrapper[4727]: E1210 14:34:31.974784 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:32.474761428 +0000 UTC m=+176.669535980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.076268 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:32 crc kubenswrapper[4727]: E1210 14:34:32.076797 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:32.576779714 +0000 UTC m=+176.771554256 (durationBeforeRetry 500ms). 
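The router startup probe output above ("[-]backend-http failed: reason withheld ... [+]process-running ok ... healthz check failed") is in the usual Kubernetes healthz mux format: one [+]name ok or [-]name failed line per named sub-check, with a non-200 status when any check fails, which is why the prober reports "HTTP probe failed with statuscode: 500". A hand-rolled equivalent for illustration only; the check names backend-http, has-synced, and process-running come from the log, while the pass/fail logic here is invented:

// Illustrative healthz-style aggregator in the [+]/[-] line format.
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // the probe sees statuscode: 500
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	http.HandleFunc("/healthz", healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not ready") }},
		{"process-running", func() error { return nil }},
	}))
	http.ListenAndServe(":1936", nil)
}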
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.124018 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sbrvz" event={"ID":"5e9adb2f-2d9c-461a-b86c-07e2325dade7","Type":"ContainerStarted","Data":"dce8a148e24c53f00a75e8fea3823a7e675513f2550913992fdb1bf0fd825e49"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.126736 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4lp8p" event={"ID":"d780f1ac-7f3c-4598-8234-825e81e4b9d1","Type":"ContainerStarted","Data":"2cf39297549bbe1ef544b2b262e10dc54ed4c99cb38165d3fd1f5e4827b63ad0"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.129473 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc" event={"ID":"198563d9-9967-47b7-aa02-c2b5be2d7c4b","Type":"ContainerStarted","Data":"104c9d0a34aeccf5c4338852db379e30f15e620b6575782d8a523c12170e4e59"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.132677 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" event={"ID":"89ccee90-bde1-4102-a1a6-08d2b5d80aac","Type":"ContainerStarted","Data":"b4606c433dfa30e877696e9fbfd656f5f25c1fcecf3f2122acd195a1a99c04d0"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.139530 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r4rds" event={"ID":"229d2695-3886-468e-80c7-69660da8e109","Type":"ContainerStarted","Data":"8cfdd73465de702958261e29fcd568a954976427a6480d2afbd86d475a47ed70"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.140432 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-r4rds" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.147491 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" event={"ID":"e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d","Type":"ContainerStarted","Data":"e2a433b2a307c66a1408b02c64460bc6981fe007035dd5d197c26151c72ff8f3"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.152758 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k" event={"ID":"8a467255-5402-414b-9d42-0621d826aede","Type":"ContainerStarted","Data":"b9ec18015c4764f1c438949b2107ae72172ae4ec23af6cd70098a0cb61543f40"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.152813 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k" event={"ID":"8a467255-5402-414b-9d42-0621d826aede","Type":"ContainerStarted","Data":"c77dd1c84e5631b8f06cbfc5d47f2c8e9213f1803b81445f12db8f1086a1622c"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.154665 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4dg2g" 
event={"ID":"f805118b-6de6-41c9-92c3-35acc76c5c9a","Type":"ContainerStarted","Data":"84673194090a867984c95a693895cb06f61e78aead50bd65a9bed0867393fea6"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.156805 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" event={"ID":"2bcea03d-69bd-4530-91b9-ca3ba1ffc871","Type":"ContainerStarted","Data":"a491edc4551407ff81d5dbf27bf814b069a562431db30bad89910f9412fce92e"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.169887 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-m2s8b" event={"ID":"798ea935-5c9b-4f13-9ed2-fa8cb0088895","Type":"ContainerStarted","Data":"fe1c3f3e558920bcad380da450321aa762b78660f335c73fee38958bc3134803"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.177977 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:32 crc kubenswrapper[4727]: E1210 14:34:32.179602 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:32.679578339 +0000 UTC m=+176.874352881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.193877 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-89q8l" event={"ID":"cb1bb43a-d94a-47fb-b22d-060818c735aa","Type":"ContainerStarted","Data":"5c296b883401865eac4ec236b334ef4e0e7381bbf833219accec501c34f2bdf6"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.198627 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" podStartSLOduration=150.19860043 podStartE2EDuration="2m30.19860043s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:32.19696865 +0000 UTC m=+176.391743202" watchObservedRunningTime="2025-12-10 14:34:32.19860043 +0000 UTC m=+176.393374982" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.208010 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" event={"ID":"89063793-7aa2-4220-b653-cb480a93797f","Type":"ContainerStarted","Data":"6fcefa78c53bed2c80f1299683892126daa1b7942dbc23a2ad95a3e9f9ffa52a"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.220928 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t" 
event={"ID":"5b837b34-03b2-4bcc-827c-fc8046263718","Type":"ContainerStarted","Data":"546db0789aa0d5c56724ea49c52a74da1941fceb59b057b4f54407489e92805c"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.221541 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.227083 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-psftx" event={"ID":"3f4cb8e1-b64a-4d17-a3b3-b1dcf4cc75b0","Type":"ContainerStarted","Data":"7e68169e3ca3d169ba62455f244a3d750365b3080cf98c97280cf81623bf2a98"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.227130 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-psftx" event={"ID":"3f4cb8e1-b64a-4d17-a3b3-b1dcf4cc75b0","Type":"ContainerStarted","Data":"55e5a94f4242ec617ebcd3e098e9ef4cb7f16dd937d184b00ac80afd28e1935a"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.235003 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dgbsr" event={"ID":"e0975aba-5e6f-47df-9c61-a5b9a447dcc8","Type":"ContainerStarted","Data":"52ca8c55a95d664c6fbf10a003623d56523b1ff370c354b34a94830b9fb22689"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.266194 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-r4rds" podStartSLOduration=12.266167053 podStartE2EDuration="12.266167053s" podCreationTimestamp="2025-12-10 14:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:32.262617995 +0000 UTC m=+176.457392557" watchObservedRunningTime="2025-12-10 14:34:32.266167053 +0000 UTC m=+176.460941595" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.274618 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" event={"ID":"174ca2d4-702a-48fe-83d9-a9bfc1353c78","Type":"ContainerStarted","Data":"f74b03e1610a5c3032f9a883bb9329f2a560cf6b8d40942c50bb9fba7d9b9034"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.280319 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:32 crc kubenswrapper[4727]: E1210 14:34:32.282099 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:32.782085207 +0000 UTC m=+176.976859749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.294727 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-59pjb" event={"ID":"57603b1b-8b5f-45b6-96ad-f8e4bc495165","Type":"ContainerStarted","Data":"6dfb5a1e53055c594214f8d9f70d1f11ccd595bc03b48cd719eeaca956552a36"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.305089 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" event={"ID":"f1f25eda-d2d3-4eb9-9d05-24cc0293fa37","Type":"ContainerStarted","Data":"df1d367f1b2af99795df09de379ecc496190c6a7987cb5621df74d1e45235f9d"} Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.305137 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.306287 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.306365 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.308435 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.315311 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.315423 4727 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6w2jq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.315461 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" podUID="521cc0a4-1afa-4ef6-bdd6-37c60f87273f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.322290 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97szk" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.357247 4727 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xd42k" podStartSLOduration=150.357220507 podStartE2EDuration="2m30.357220507s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:32.354557161 +0000 UTC m=+176.549331713" watchObservedRunningTime="2025-12-10 14:34:32.357220507 +0000 UTC m=+176.551995049" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.362214 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" podStartSLOduration=150.36217749 podStartE2EDuration="2m30.36217749s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:32.322660882 +0000 UTC m=+176.517435424" watchObservedRunningTime="2025-12-10 14:34:32.36217749 +0000 UTC m=+176.556952122" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.381693 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:32 crc kubenswrapper[4727]: E1210 14:34:32.383650 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:32.883622151 +0000 UTC m=+177.078396693 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.413840 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-wwmwn" podStartSLOduration=150.413814678 podStartE2EDuration="2m30.413814678s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:32.412334592 +0000 UTC m=+176.607109154" watchObservedRunningTime="2025-12-10 14:34:32.413814678 +0000 UTC m=+176.608589220" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.489137 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:32 crc kubenswrapper[4727]: E1210 14:34:32.489882 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:32.989865171 +0000 UTC m=+177.184639713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.532238 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-m2s8b" podStartSLOduration=150.53220878 podStartE2EDuration="2m30.53220878s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:32.465250182 +0000 UTC m=+176.660024724" watchObservedRunningTime="2025-12-10 14:34:32.53220878 +0000 UTC m=+176.726983322" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.586564 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" podStartSLOduration=150.586539995 podStartE2EDuration="2m30.586539995s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:32.531372279 +0000 UTC m=+176.726146821" watchObservedRunningTime="2025-12-10 14:34:32.586539995 +0000 UTC m=+176.781314537" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.587565 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-psftx" podStartSLOduration=150.58755929 podStartE2EDuration="2m30.58755929s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:32.583308765 +0000 UTC m=+176.778083337" watchObservedRunningTime="2025-12-10 14:34:32.58755929 +0000 UTC m=+176.782333822" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.590693 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:32 crc kubenswrapper[4727]: E1210 14:34:32.608976 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:33.108945 +0000 UTC m=+177.303719542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.672447 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" podStartSLOduration=150.672415281 podStartE2EDuration="2m30.672415281s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:32.65054935 +0000 UTC m=+176.845323892" watchObservedRunningTime="2025-12-10 14:34:32.672415281 +0000 UTC m=+176.867189823" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.703017 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:32 crc kubenswrapper[4727]: E1210 14:34:32.703617 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:33.203595673 +0000 UTC m=+177.398370225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.714971 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.715022 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.715454 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-59pjb" podStartSLOduration=150.715420866 podStartE2EDuration="2m30.715420866s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:32.712828202 +0000 UTC m=+176.907602744" watchObservedRunningTime="2025-12-10 14:34:32.715420866 +0000 UTC m=+176.910195408" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.804266 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:32 crc kubenswrapper[4727]: E1210 14:34:32.820055 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:33.319993395 +0000 UTC m=+177.514767957 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.862137 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t" podStartSLOduration=150.862107818 podStartE2EDuration="2m30.862107818s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:32.803478866 +0000 UTC m=+176.998253418" watchObservedRunningTime="2025-12-10 14:34:32.862107818 +0000 UTC m=+177.056882360" Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.905804 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:32 crc kubenswrapper[4727]: E1210 14:34:32.906364 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:33.406342763 +0000 UTC m=+177.601117305 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.943265 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dgbsr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:32 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Dec 10 14:34:32 crc kubenswrapper[4727]: [+]process-running ok Dec 10 14:34:32 crc kubenswrapper[4727]: healthz check failed Dec 10 14:34:32 crc kubenswrapper[4727]: I1210 14:34:32.943340 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dgbsr" podUID="e0975aba-5e6f-47df-9c61-a5b9a447dcc8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.010242 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.010796 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:33.510771809 +0000 UTC m=+177.705546351 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.111982 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.112556 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:33.612525728 +0000 UTC m=+177.807300390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.118065 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.118135 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.176597 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.176696 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.178524 4727 patch_prober.go:28] interesting pod/console-f9d7485db-swj5s container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.178629 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-swj5s" podUID="6d8cde10-5565-4980-a4e2-a30f26707a0e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.209547 4727 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-brtj9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.209673 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" podUID="89063793-7aa2-4220-b653-cb480a93797f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.19:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.213338 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.213686 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:33.713649212 +0000 UTC m=+177.908423764 (durationBeforeRetry 500ms). 
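Two distinct probe failure shapes interleave above, and both are ordinary HTTP client errors: "connect: connection refused" means nothing is listening on the pod IP yet (console, downloads, marketplace-operator), while "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" means the packageserver socket accepted the connection but did not answer within the probe timeout, which defaults to 1 second. An illustrative reproduction of the timeout variant; the address is taken from the log and is only reachable from inside the cluster:

// Reproduce the probe-style timeout error shape with a plain HTTP client.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 1 * time.Second, // probe timeoutSeconds defaults to 1
		Transport: &http.Transport{
			// kubelet's HTTPS probes do not verify the serving certificate
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.217.0.19:5443/healthz")
	if err != nil {
		fmt.Println(err) // e.g. ... (Client.Timeout exceeded while awaiting headers)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}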
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.213782 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.214172 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:33.714162435 +0000 UTC m=+177.908936977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.311484 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wwmwn" event={"ID":"2bcea03d-69bd-4530-91b9-ca3ba1ffc871","Type":"ContainerStarted","Data":"e84c3fdc53e6f131491d28d2660e7072e4e6bff328bba62af73c5a214b6be67d"} Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.312728 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.312771 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.313074 4727 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6w2jq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.313143 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" podUID="521cc0a4-1afa-4ef6-bdd6-37c60f87273f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.314715 
4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.314889 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:33.814861348 +0000 UTC m=+178.009635890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.315002 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.315392 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:33.815376411 +0000 UTC m=+178.010150953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.319831 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk8s" Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.416586 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.416850 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:33.916812962 +0000 UTC m=+178.111587504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.417545 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.419989 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:33.91996325 +0000 UTC m=+178.114737792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.519831 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.520019 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.019988967 +0000 UTC m=+178.214763509 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.520238 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.520586 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.020575391 +0000 UTC m=+178.215349933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.621505 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.621726 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.121688904 +0000 UTC m=+178.316463446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.621793 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt"
Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.622192 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.122175496 +0000 UTC m=+178.316950038 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.672745 4727 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6w2jq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body=
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.672841 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" podUID="521cc0a4-1afa-4ef6-bdd6-37c60f87273f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.672745 4727 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6w2jq container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body=
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.672900 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" podUID="521cc0a4-1afa-4ef6-bdd6-37c60f87273f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.709187 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p5v95"]
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.710216 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5v95"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.722624 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.723033 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.723203 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.223163016 +0000 UTC m=+178.417937558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.723488 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt"
Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.723884 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.223876764 +0000 UTC m=+178.418651306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.736137 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p5v95"]
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.824951 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.825166 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.32512416 +0000 UTC m=+178.519898702 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.825230 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.825337 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2484515c-1846-4e63-9747-bc6dc81a574c-catalog-content\") pod \"community-operators-p5v95\" (UID: \"2484515c-1846-4e63-9747-bc6dc81a574c\") " pod="openshift-marketplace/community-operators-p5v95"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.825387 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc5m8\" (UniqueName: \"kubernetes.io/projected/2484515c-1846-4e63-9747-bc6dc81a574c-kube-api-access-nc5m8\") pod \"community-operators-p5v95\" (UID: \"2484515c-1846-4e63-9747-bc6dc81a574c\") " pod="openshift-marketplace/community-operators-p5v95"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.825468 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2484515c-1846-4e63-9747-bc6dc81a574c-utilities\") pod \"community-operators-p5v95\" (UID: \"2484515c-1846-4e63-9747-bc6dc81a574c\") " pod="openshift-marketplace/community-operators-p5v95"
Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.825590 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.325572592 +0000 UTC m=+178.520347134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.910742 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.910820 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.910744 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.910941 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.926400 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.926630 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.426591413 +0000 UTC m=+178.621365965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.926696 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc5m8\" (UniqueName: \"kubernetes.io/projected/2484515c-1846-4e63-9747-bc6dc81a574c-kube-api-access-nc5m8\") pod \"community-operators-p5v95\" (UID: \"2484515c-1846-4e63-9747-bc6dc81a574c\") " pod="openshift-marketplace/community-operators-p5v95"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.926862 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2484515c-1846-4e63-9747-bc6dc81a574c-utilities\") pod \"community-operators-p5v95\" (UID: \"2484515c-1846-4e63-9747-bc6dc81a574c\") " pod="openshift-marketplace/community-operators-p5v95"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.927135 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.927218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2484515c-1846-4e63-9747-bc6dc81a574c-catalog-content\") pod \"community-operators-p5v95\" (UID: \"2484515c-1846-4e63-9747-bc6dc81a574c\") " pod="openshift-marketplace/community-operators-p5v95"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.927323 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2484515c-1846-4e63-9747-bc6dc81a574c-utilities\") pod \"community-operators-p5v95\" (UID: \"2484515c-1846-4e63-9747-bc6dc81a574c\") " pod="openshift-marketplace/community-operators-p5v95"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.927539 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2484515c-1846-4e63-9747-bc6dc81a574c-catalog-content\") pod \"community-operators-p5v95\" (UID: \"2484515c-1846-4e63-9747-bc6dc81a574c\") " pod="openshift-marketplace/community-operators-p5v95"
Dec 10 14:34:33 crc kubenswrapper[4727]: E1210 14:34:33.927567 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.427555327 +0000 UTC m=+178.622329869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.929915 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dgbsr"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.932111 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dgbsr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 10 14:34:33 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Dec 10 14:34:33 crc kubenswrapper[4727]: [+]process-running ok
Dec 10 14:34:33 crc kubenswrapper[4727]: healthz check failed
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.932180 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dgbsr" podUID="e0975aba-5e6f-47df-9c61-a5b9a447dcc8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 10 14:34:33 crc kubenswrapper[4727]: I1210 14:34:33.959106 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc5m8\" (UniqueName: \"kubernetes.io/projected/2484515c-1846-4e63-9747-bc6dc81a574c-kube-api-access-nc5m8\") pod \"community-operators-p5v95\" (UID: \"2484515c-1846-4e63-9747-bc6dc81a574c\") " pod="openshift-marketplace/community-operators-p5v95"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.025415 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5v95"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.028575 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:34 crc kubenswrapper[4727]: E1210 14:34:34.028814 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.528774413 +0000 UTC m=+178.723548955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.028956 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt"
Dec 10 14:34:34 crc kubenswrapper[4727]: E1210 14:34:34.029358 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.529349887 +0000 UTC m=+178.724124429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.107641 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xfkcf"]
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.109302 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xfkcf"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.129026 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xfkcf"]
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.131538 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:34 crc kubenswrapper[4727]: E1210 14:34:34.132581 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.632547692 +0000 UTC m=+178.827322404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.133131 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt"
Dec 10 14:34:34 crc kubenswrapper[4727]: E1210 14:34:34.138821 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.637654879 +0000 UTC m=+178.832429421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:34 crc kubenswrapper[4727]: E1210 14:34:34.241449 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.741414948 +0000 UTC m=+178.936189490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.242218 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.243721 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tggrq\" (UniqueName: \"kubernetes.io/projected/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-kube-api-access-tggrq\") pod \"community-operators-xfkcf\" (UID: \"98bd9482-a59d-4e44-ba30-6a0277bcb2ae\") " pod="openshift-marketplace/community-operators-xfkcf"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.243897 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt"
Dec 10 14:34:34 crc kubenswrapper[4727]: E1210 14:34:34.245422 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.745400056 +0000 UTC m=+178.940174598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.243993 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-catalog-content\") pod \"community-operators-xfkcf\" (UID: \"98bd9482-a59d-4e44-ba30-6a0277bcb2ae\") " pod="openshift-marketplace/community-operators-xfkcf"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.246050 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-utilities\") pod \"community-operators-xfkcf\" (UID: \"98bd9482-a59d-4e44-ba30-6a0277bcb2ae\") " pod="openshift-marketplace/community-operators-xfkcf"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.311255 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g7k77"]
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.313030 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g7k77"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.313148 4727 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-brtj9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.313568 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9" podUID="89063793-7aa2-4220-b653-cb480a93797f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.19:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.315222 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.329367 4727 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6w2jq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body=
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.329436 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" podUID="521cc0a4-1afa-4ef6-bdd6-37c60f87273f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.346434 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g7k77"]
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.347324 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.347611 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tggrq\" (UniqueName: \"kubernetes.io/projected/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-kube-api-access-tggrq\") pod \"community-operators-xfkcf\" (UID: \"98bd9482-a59d-4e44-ba30-6a0277bcb2ae\") " pod="openshift-marketplace/community-operators-xfkcf"
Dec 10 14:34:34 crc kubenswrapper[4727]: E1210 14:34:34.348760 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.848727175 +0000 UTC m=+179.043501717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.349259 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-catalog-content\") pod \"community-operators-xfkcf\" (UID: \"98bd9482-a59d-4e44-ba30-6a0277bcb2ae\") " pod="openshift-marketplace/community-operators-xfkcf"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.349370 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-utilities\") pod \"community-operators-xfkcf\" (UID: \"98bd9482-a59d-4e44-ba30-6a0277bcb2ae\") " pod="openshift-marketplace/community-operators-xfkcf"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.350296 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-utilities\") pod \"community-operators-xfkcf\" (UID: \"98bd9482-a59d-4e44-ba30-6a0277bcb2ae\") " pod="openshift-marketplace/community-operators-xfkcf"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.350767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-catalog-content\") pod \"community-operators-xfkcf\" (UID: \"98bd9482-a59d-4e44-ba30-6a0277bcb2ae\") " pod="openshift-marketplace/community-operators-xfkcf"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.385163 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tggrq\" (UniqueName: \"kubernetes.io/projected/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-kube-api-access-tggrq\") pod \"community-operators-xfkcf\" (UID: \"98bd9482-a59d-4e44-ba30-6a0277bcb2ae\") " pod="openshift-marketplace/community-operators-xfkcf"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.432293 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xfkcf"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.451630 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-catalog-content\") pod \"certified-operators-g7k77\" (UID: \"d3b0146e-cc5a-48ef-904a-b2d28a6720f3\") " pod="openshift-marketplace/certified-operators-g7k77"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.452027 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4vqw\" (UniqueName: \"kubernetes.io/projected/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-kube-api-access-n4vqw\") pod \"certified-operators-g7k77\" (UID: \"d3b0146e-cc5a-48ef-904a-b2d28a6720f3\") " pod="openshift-marketplace/certified-operators-g7k77"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.452125 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-utilities\") pod \"certified-operators-g7k77\" (UID: \"d3b0146e-cc5a-48ef-904a-b2d28a6720f3\") " pod="openshift-marketplace/certified-operators-g7k77"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.452292 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt"
Dec 10 14:34:34 crc kubenswrapper[4727]: E1210 14:34:34.459780 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:34.959760184 +0000 UTC m=+179.154534726 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.511391 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p5v95"]
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.526566 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s2hk4"]
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.527621 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s2hk4"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.555522 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.555879 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-catalog-content\") pod \"certified-operators-g7k77\" (UID: \"d3b0146e-cc5a-48ef-904a-b2d28a6720f3\") " pod="openshift-marketplace/certified-operators-g7k77"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.555943 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4vqw\" (UniqueName: \"kubernetes.io/projected/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-kube-api-access-n4vqw\") pod \"certified-operators-g7k77\" (UID: \"d3b0146e-cc5a-48ef-904a-b2d28a6720f3\") " pod="openshift-marketplace/certified-operators-g7k77"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.555980 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-utilities\") pod \"certified-operators-g7k77\" (UID: \"d3b0146e-cc5a-48ef-904a-b2d28a6720f3\") " pod="openshift-marketplace/certified-operators-g7k77"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.556509 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-utilities\") pod \"certified-operators-g7k77\" (UID: \"d3b0146e-cc5a-48ef-904a-b2d28a6720f3\") " pod="openshift-marketplace/certified-operators-g7k77"
Dec 10 14:34:34 crc kubenswrapper[4727]: E1210 14:34:34.556623 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:35.056595651 +0000 UTC m=+179.251370193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.556884 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-catalog-content\") pod \"certified-operators-g7k77\" (UID: \"d3b0146e-cc5a-48ef-904a-b2d28a6720f3\") " pod="openshift-marketplace/certified-operators-g7k77"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.558613 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s2hk4"]
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.605841 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4vqw\" (UniqueName: \"kubernetes.io/projected/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-kube-api-access-n4vqw\") pod \"certified-operators-g7k77\" (UID: \"d3b0146e-cc5a-48ef-904a-b2d28a6720f3\") " pod="openshift-marketplace/certified-operators-g7k77"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.645794 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g7k77"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.659794 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-catalog-content\") pod \"certified-operators-s2hk4\" (UID: \"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96\") " pod="openshift-marketplace/certified-operators-s2hk4"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.659878 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.660084 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57x4b\" (UniqueName: \"kubernetes.io/projected/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-kube-api-access-57x4b\") pod \"certified-operators-s2hk4\" (UID: \"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96\") " pod="openshift-marketplace/certified-operators-s2hk4"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.660116 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-utilities\") pod \"certified-operators-s2hk4\" (UID: \"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96\") " pod="openshift-marketplace/certified-operators-s2hk4"
Dec 10 14:34:34 crc kubenswrapper[4727]: E1210 14:34:34.660539 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:35.160521234 +0000 UTC m=+179.355295766 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.740955 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xfkcf"]
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.761755 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.762262 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57x4b\" (UniqueName: \"kubernetes.io/projected/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-kube-api-access-57x4b\") pod \"certified-operators-s2hk4\" (UID: \"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96\") " pod="openshift-marketplace/certified-operators-s2hk4"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.762322 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-utilities\") pod \"certified-operators-s2hk4\" (UID: \"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96\") " pod="openshift-marketplace/certified-operators-s2hk4"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.762354 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-catalog-content\") pod \"certified-operators-s2hk4\" (UID: \"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96\") " pod="openshift-marketplace/certified-operators-s2hk4"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.763032 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-catalog-content\") pod \"certified-operators-s2hk4\" (UID: \"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96\") " pod="openshift-marketplace/certified-operators-s2hk4"
Dec 10 14:34:34 crc kubenswrapper[4727]: E1210 14:34:34.763270 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:35.263216037 +0000 UTC m=+179.457990579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.815889 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-utilities\") pod \"certified-operators-s2hk4\" (UID: \"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96\") " pod="openshift-marketplace/certified-operators-s2hk4"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.836332 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57x4b\" (UniqueName: \"kubernetes.io/projected/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-kube-api-access-57x4b\") pod \"certified-operators-s2hk4\" (UID: \"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96\") " pod="openshift-marketplace/certified-operators-s2hk4"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.848148 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s2hk4"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.863628 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt"
Dec 10 14:34:34 crc kubenswrapper[4727]: E1210 14:34:34.864227 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:35.364202147 +0000 UTC m=+179.558976889 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.886674 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-brtj9"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.936344 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dgbsr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 10 14:34:34 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Dec 10 14:34:34 crc kubenswrapper[4727]: [+]process-running ok
Dec 10 14:34:34 crc kubenswrapper[4727]: healthz check failed
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.936449 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dgbsr" podUID="e0975aba-5e6f-47df-9c61-a5b9a447dcc8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.972081 4727 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cf2gn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 10 14:34:34 crc kubenswrapper[4727]: [+]log ok
Dec 10 14:34:34 crc kubenswrapper[4727]: [+]etcd ok
Dec 10 14:34:34 crc kubenswrapper[4727]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 10 14:34:34 crc kubenswrapper[4727]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 10 14:34:34 crc kubenswrapper[4727]: [+]poststarthook/max-in-flight-filter ok
Dec 10 14:34:34 crc kubenswrapper[4727]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 10 14:34:34 crc kubenswrapper[4727]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Dec 10 14:34:34 crc kubenswrapper[4727]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Dec 10 14:34:34 crc kubenswrapper[4727]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Dec 10 14:34:34 crc kubenswrapper[4727]: [+]poststarthook/project.openshift.io-projectcache ok
Dec 10 14:34:34 crc kubenswrapper[4727]: [-]poststarthook/project.openshift.io-projectauthorizationcache failed: reason withheld
Dec 10 14:34:34 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-startinformers ok
Dec 10 14:34:34 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-restmapperupdater ok
Dec 10 14:34:34 crc kubenswrapper[4727]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 10 14:34:34 crc kubenswrapper[4727]: livez check failed
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.972191 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" podUID="89ccee90-bde1-4102-a1a6-08d2b5d80aac" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 10 14:34:34 crc kubenswrapper[4727]: I1210 14:34:34.973885 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:34 crc kubenswrapper[4727]: E1210 14:34:34.975690 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:35.475657097 +0000 UTC m=+179.670431639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.005866 4727 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-f8rtv container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 10 14:34:35 crc kubenswrapper[4727]: [+]log ok
Dec 10 14:34:35 crc kubenswrapper[4727]: [+]etcd ok
Dec 10 14:34:35 crc kubenswrapper[4727]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 10 14:34:35 crc kubenswrapper[4727]: [-]poststarthook/generic-apiserver-start-informers failed: reason withheld
Dec 10 14:34:35 crc kubenswrapper[4727]: [+]poststarthook/max-in-flight-filter ok
Dec 10 14:34:35 crc kubenswrapper[4727]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 10 14:34:35 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-StartUserInformer ok
Dec 10 14:34:35 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Dec 10 14:34:35 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Dec 10 14:34:35 crc kubenswrapper[4727]: livez check failed
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.005967 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" podUID="e7ff80f5-3bb1-4f57-82bf-86cae2ecb87d" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.075887 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt"
Dec 10 14:34:35 crc kubenswrapper[4727]: E1210 14:34:35.076393 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:35.57633935 +0000 UTC m=+179.771113892 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.178009 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:35 crc kubenswrapper[4727]: E1210 14:34:35.178368 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:35.678326215 +0000 UTC m=+179.873100757 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.203067 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g7k77"]
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.281472 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt"
Dec 10 14:34:35 crc kubenswrapper[4727]: E1210 14:34:35.281957 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:35.78194021 +0000 UTC m=+179.976714742 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.284372 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s2hk4"]
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.332864 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfkcf" event={"ID":"98bd9482-a59d-4e44-ba30-6a0277bcb2ae","Type":"ContainerStarted","Data":"0042ebdcc288a9e1e32de1f86655fa3676ed693ad643bf1c98015f20593d01d6"}
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.343307 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-89q8l" event={"ID":"cb1bb43a-d94a-47fb-b22d-060818c735aa","Type":"ContainerStarted","Data":"76e45dc5e71ab73bc37914637f7bcf6c3cd6c1d0a6fe5f4388b2269eacab7812"}
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.345187 4727 generic.go:334] "Generic (PLEG): container finished" podID="dc615bdc-da08-4680-afa5-d500f597d18b" containerID="362c86ff2ae56d728e14e21cff89033742d4381bf06ebf12435a98a04e511c4b" exitCode=0
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.345309 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" event={"ID":"dc615bdc-da08-4680-afa5-d500f597d18b","Type":"ContainerDied","Data":"362c86ff2ae56d728e14e21cff89033742d4381bf06ebf12435a98a04e511c4b"}
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.346861 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5v95" event={"ID":"2484515c-1846-4e63-9747-bc6dc81a574c","Type":"ContainerStarted","Data":"9c5622da37dcf0695e48e47a1f2958b55f5b1b63aff2ae011a4cb46074050878"}
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.383348 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:35 crc kubenswrapper[4727]: E1210 14:34:35.383555 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:35.883525405 +0000 UTC m=+180.078299957 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.383736 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt"
Dec 10 14:34:35 crc kubenswrapper[4727]: E1210 14:34:35.384164 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:35.884155661 +0000 UTC m=+180.078930203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.485114 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:35 crc kubenswrapper[4727]: E1210 14:34:35.485815 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:35.985797148 +0000 UTC m=+180.180571690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:35 crc kubenswrapper[4727]: E1210 14:34:35.587658 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:36.087616769 +0000 UTC m=+180.282391311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.587174 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt"
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.666523 4727 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.693798 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:35 crc kubenswrapper[4727]: E1210 14:34:35.694045 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:36.194007163 +0000 UTC m=+180.388781705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.694224 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt"
Dec 10 14:34:35 crc kubenswrapper[4727]: E1210 14:34:35.694680 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:36.194661549 +0000 UTC m=+180.389436101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.796929 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:35 crc kubenswrapper[4727]: E1210 14:34:35.797565 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:36.297505955 +0000 UTC m=+180.492280487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.808631 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.810032 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.818257 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.844698 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.846763 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.898706 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c613cd0-bcd2-4720-82bd-ae1fd3525bc8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8c613cd0-bcd2-4720-82bd-ae1fd3525bc8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.898919 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c613cd0-bcd2-4720-82bd-ae1fd3525bc8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8c613cd0-bcd2-4720-82bd-ae1fd3525bc8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.899319 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:35 crc kubenswrapper[4727]: E1210 14:34:35.900030 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:36.400009783 +0000 UTC m=+180.594784325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.932637 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dgbsr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:35 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Dec 10 14:34:35 crc kubenswrapper[4727]: [+]process-running ok Dec 10 14:34:35 crc kubenswrapper[4727]: healthz check failed Dec 10 14:34:35 crc kubenswrapper[4727]: I1210 14:34:35.932726 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dgbsr" podUID="e0975aba-5e6f-47df-9c61-a5b9a447dcc8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.000373 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.000588 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c613cd0-bcd2-4720-82bd-ae1fd3525bc8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8c613cd0-bcd2-4720-82bd-ae1fd3525bc8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.000641 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c613cd0-bcd2-4720-82bd-ae1fd3525bc8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8c613cd0-bcd2-4720-82bd-ae1fd3525bc8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:36 crc kubenswrapper[4727]: E1210 14:34:36.000782 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:36.500753698 +0000 UTC m=+180.695528240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.000797 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c613cd0-bcd2-4720-82bd-ae1fd3525bc8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8c613cd0-bcd2-4720-82bd-ae1fd3525bc8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.019802 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c613cd0-bcd2-4720-82bd-ae1fd3525bc8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8c613cd0-bcd2-4720-82bd-ae1fd3525bc8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.102237 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:36 crc kubenswrapper[4727]: E1210 14:34:36.102674 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:36.602660381 +0000 UTC m=+180.797434923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.107288 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-thvgk"] Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.108515 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.111982 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.121643 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-thvgk"] Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.163710 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.202771 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:36 crc kubenswrapper[4727]: E1210 14:34:36.203309 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:36.703266232 +0000 UTC m=+180.898040774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.203377 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.203646 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517d22f8-c007-4428-82f1-1fe55445d509-catalog-content\") pod \"redhat-marketplace-thvgk\" (UID: \"517d22f8-c007-4428-82f1-1fe55445d509\") " pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:34:36 crc kubenswrapper[4727]: E1210 14:34:36.203824 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:36.703802445 +0000 UTC m=+180.898577177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.204545 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjdfb\" (UniqueName: \"kubernetes.io/projected/517d22f8-c007-4428-82f1-1fe55445d509-kube-api-access-fjdfb\") pod \"redhat-marketplace-thvgk\" (UID: \"517d22f8-c007-4428-82f1-1fe55445d509\") " pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.204595 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517d22f8-c007-4428-82f1-1fe55445d509-utilities\") pod \"redhat-marketplace-thvgk\" (UID: \"517d22f8-c007-4428-82f1-1fe55445d509\") " pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.306474 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:36 crc kubenswrapper[4727]: E1210 14:34:36.306703 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:36.806647361 +0000 UTC m=+181.001421943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.307269 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517d22f8-c007-4428-82f1-1fe55445d509-catalog-content\") pod \"redhat-marketplace-thvgk\" (UID: \"517d22f8-c007-4428-82f1-1fe55445d509\") " pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.307316 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjdfb\" (UniqueName: \"kubernetes.io/projected/517d22f8-c007-4428-82f1-1fe55445d509-kube-api-access-fjdfb\") pod \"redhat-marketplace-thvgk\" (UID: \"517d22f8-c007-4428-82f1-1fe55445d509\") " pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.307362 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517d22f8-c007-4428-82f1-1fe55445d509-utilities\") pod \"redhat-marketplace-thvgk\" (UID: \"517d22f8-c007-4428-82f1-1fe55445d509\") " pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.307450 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.307767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517d22f8-c007-4428-82f1-1fe55445d509-catalog-content\") pod \"redhat-marketplace-thvgk\" (UID: \"517d22f8-c007-4428-82f1-1fe55445d509\") " pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:34:36 crc kubenswrapper[4727]: E1210 14:34:36.307880 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:36.807867312 +0000 UTC m=+181.002641854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.307936 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517d22f8-c007-4428-82f1-1fe55445d509-utilities\") pod \"redhat-marketplace-thvgk\" (UID: \"517d22f8-c007-4428-82f1-1fe55445d509\") " pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.326650 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjdfb\" (UniqueName: \"kubernetes.io/projected/517d22f8-c007-4428-82f1-1fe55445d509-kube-api-access-fjdfb\") pod \"redhat-marketplace-thvgk\" (UID: \"517d22f8-c007-4428-82f1-1fe55445d509\") " pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.357526 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2hk4" event={"ID":"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96","Type":"ContainerStarted","Data":"94e5e8f5f38763b965382517dfce42503f1a8efe07a97b6e1cd30824a43787b1"} Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.358741 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7k77" event={"ID":"d3b0146e-cc5a-48ef-904a-b2d28a6720f3","Type":"ContainerStarted","Data":"28a0a8696c8faa89efe7cd83544dfd4d2c9ecf26ae15c363c1268984f19d5e30"} Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.374344 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.408600 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:36 crc kubenswrapper[4727]: E1210 14:34:36.409116 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:36.909087498 +0000 UTC m=+181.103862060 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.431366 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.439340 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.516265 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:36 crc kubenswrapper[4727]: E1210 14:34:36.517155 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:37.017133693 +0000 UTC m=+181.211908245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.543211 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ns2g4"] Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.544371 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ns2g4" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.610454 4727 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-10T14:34:35.666873151Z","Handler":null,"Name":""} Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.619296 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:36 crc kubenswrapper[4727]: E1210 14:34:36.619533 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:37.119496897 +0000 UTC m=+181.314271439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.619836 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c98zl\" (UniqueName: \"kubernetes.io/projected/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-kube-api-access-c98zl\") pod \"redhat-marketplace-ns2g4\" (UID: \"4da708e0-26ae-4bf4-ab5c-ca793fc6e207\") " pod="openshift-marketplace/redhat-marketplace-ns2g4" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.619932 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.619976 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-utilities\") pod \"redhat-marketplace-ns2g4\" (UID: \"4da708e0-26ae-4bf4-ab5c-ca793fc6e207\") " pod="openshift-marketplace/redhat-marketplace-ns2g4" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.620003 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-catalog-content\") pod \"redhat-marketplace-ns2g4\" (UID: \"4da708e0-26ae-4bf4-ab5c-ca793fc6e207\") " pod="openshift-marketplace/redhat-marketplace-ns2g4" Dec 10 14:34:36 crc kubenswrapper[4727]: E1210 14:34:36.620440 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:37.120430671 +0000 UTC m=+181.315205213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n87wt" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.623619 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ns2g4"] Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.648551 4727 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.648609 4727 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.696136 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.720857 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.721161 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-catalog-content\") pod \"redhat-marketplace-ns2g4\" (UID: \"4da708e0-26ae-4bf4-ab5c-ca793fc6e207\") " pod="openshift-marketplace/redhat-marketplace-ns2g4" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.721242 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c98zl\" (UniqueName: \"kubernetes.io/projected/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-kube-api-access-c98zl\") pod \"redhat-marketplace-ns2g4\" (UID: \"4da708e0-26ae-4bf4-ab5c-ca793fc6e207\") " pod="openshift-marketplace/redhat-marketplace-ns2g4" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.721326 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-utilities\") pod \"redhat-marketplace-ns2g4\" (UID: \"4da708e0-26ae-4bf4-ab5c-ca793fc6e207\") " pod="openshift-marketplace/redhat-marketplace-ns2g4" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.723583 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-utilities\") pod \"redhat-marketplace-ns2g4\" (UID: \"4da708e0-26ae-4bf4-ab5c-ca793fc6e207\") " pod="openshift-marketplace/redhat-marketplace-ns2g4" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.726083 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-catalog-content\") pod 
\"redhat-marketplace-ns2g4\" (UID: \"4da708e0-26ae-4bf4-ab5c-ca793fc6e207\") " pod="openshift-marketplace/redhat-marketplace-ns2g4" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.738545 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.749608 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c98zl\" (UniqueName: \"kubernetes.io/projected/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-kube-api-access-c98zl\") pod \"redhat-marketplace-ns2g4\" (UID: \"4da708e0-26ae-4bf4-ab5c-ca793fc6e207\") " pod="openshift-marketplace/redhat-marketplace-ns2g4" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.758455 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-thvgk"] Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.822448 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc615bdc-da08-4680-afa5-d500f597d18b-config-volume\") pod \"dc615bdc-da08-4680-afa5-d500f597d18b\" (UID: \"dc615bdc-da08-4680-afa5-d500f597d18b\") " Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.822535 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mtdq\" (UniqueName: \"kubernetes.io/projected/dc615bdc-da08-4680-afa5-d500f597d18b-kube-api-access-2mtdq\") pod \"dc615bdc-da08-4680-afa5-d500f597d18b\" (UID: \"dc615bdc-da08-4680-afa5-d500f597d18b\") " Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.822604 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc615bdc-da08-4680-afa5-d500f597d18b-secret-volume\") pod \"dc615bdc-da08-4680-afa5-d500f597d18b\" (UID: \"dc615bdc-da08-4680-afa5-d500f597d18b\") " Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.823071 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.824089 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc615bdc-da08-4680-afa5-d500f597d18b-config-volume" (OuterVolumeSpecName: "config-volume") pod "dc615bdc-da08-4680-afa5-d500f597d18b" (UID: "dc615bdc-da08-4680-afa5-d500f597d18b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.827633 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc615bdc-da08-4680-afa5-d500f597d18b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dc615bdc-da08-4680-afa5-d500f597d18b" (UID: "dc615bdc-da08-4680-afa5-d500f597d18b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.829980 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc615bdc-da08-4680-afa5-d500f597d18b-kube-api-access-2mtdq" (OuterVolumeSpecName: "kube-api-access-2mtdq") pod "dc615bdc-da08-4680-afa5-d500f597d18b" (UID: "dc615bdc-da08-4680-afa5-d500f597d18b"). InnerVolumeSpecName "kube-api-access-2mtdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.924394 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc615bdc-da08-4680-afa5-d500f597d18b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.924434 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mtdq\" (UniqueName: \"kubernetes.io/projected/dc615bdc-da08-4680-afa5-d500f597d18b-kube-api-access-2mtdq\") on node \"crc\" DevicePath \"\"" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.924448 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc615bdc-da08-4680-afa5-d500f597d18b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.929870 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ns2g4" Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.935967 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dgbsr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:36 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Dec 10 14:34:36 crc kubenswrapper[4727]: [+]process-running ok Dec 10 14:34:36 crc kubenswrapper[4727]: healthz check failed Dec 10 14:34:36 crc kubenswrapper[4727]: I1210 14:34:36.936061 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dgbsr" podUID="e0975aba-5e6f-47df-9c61-a5b9a447dcc8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.066281 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.066995 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.193685 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ns2g4"] Dec 10 14:34:37 crc kubenswrapper[4727]: W1210 14:34:37.204082 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4da708e0_26ae_4bf4_ab5c_ca793fc6e207.slice/crio-ff508e9bba23a442780d3eee6e1299fb898916fd776216b02fe009896b8e7e03 WatchSource:0}: Error finding container ff508e9bba23a442780d3eee6e1299fb898916fd776216b02fe009896b8e7e03: Status 404 returned error can't find the container with id ff508e9bba23a442780d3eee6e1299fb898916fd776216b02fe009896b8e7e03 Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.232161 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n87wt\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.311276 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qn5hx"] Dec 10 14:34:37 crc kubenswrapper[4727]: E1210 14:34:37.311999 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc615bdc-da08-4680-afa5-d500f597d18b" containerName="collect-profiles" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.312024 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc615bdc-da08-4680-afa5-d500f597d18b" containerName="collect-profiles" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.312265 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc615bdc-da08-4680-afa5-d500f597d18b" containerName="collect-profiles" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.313414 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.315707 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.339556 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qn5hx"] Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.374619 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.378825 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns2g4" event={"ID":"4da708e0-26ae-4bf4-ab5c-ca793fc6e207","Type":"ContainerStarted","Data":"ff508e9bba23a442780d3eee6e1299fb898916fd776216b02fe009896b8e7e03"} Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.382596 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.382985 4727 generic.go:334] "Generic (PLEG): container finished" podID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" containerID="74f546d67665f1ed34428d9c57ee8f2f99b69e32bba2d8c08d9618aaa57c5b71" exitCode=0 Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.383642 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2hk4" event={"ID":"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96","Type":"ContainerDied","Data":"74f546d67665f1ed34428d9c57ee8f2f99b69e32bba2d8c08d9618aaa57c5b71"} Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.386708 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.390483 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" event={"ID":"dc615bdc-da08-4680-afa5-d500f597d18b","Type":"ContainerDied","Data":"ee6b040cb6292a6cd84cfee48fe8e46bcf5b18969ac85a2bbcfc92d1d5c9575e"} Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.390549 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee6b040cb6292a6cd84cfee48fe8e46bcf5b18969ac85a2bbcfc92d1d5c9575e" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.390688 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.404518 4727 generic.go:334] "Generic (PLEG): container finished" podID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" containerID="4de3f7a881f15b06f689bf45a69d548cfe92ad861f3d7f2e208e923fd8ccc3e1" exitCode=0 Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.405551 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7k77" event={"ID":"d3b0146e-cc5a-48ef-904a-b2d28a6720f3","Type":"ContainerDied","Data":"4de3f7a881f15b06f689bf45a69d548cfe92ad861f3d7f2e208e923fd8ccc3e1"} Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.435892 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmqgx\" (UniqueName: \"kubernetes.io/projected/7f245f78-d777-49e5-8bf1-69a6bb04943b-kube-api-access-pmqgx\") pod \"redhat-operators-qn5hx\" (UID: \"7f245f78-d777-49e5-8bf1-69a6bb04943b\") " pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.436035 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f245f78-d777-49e5-8bf1-69a6bb04943b-catalog-content\") pod \"redhat-operators-qn5hx\" (UID: \"7f245f78-d777-49e5-8bf1-69a6bb04943b\") " pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.436101 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f245f78-d777-49e5-8bf1-69a6bb04943b-utilities\") pod \"redhat-operators-qn5hx\" (UID: \"7f245f78-d777-49e5-8bf1-69a6bb04943b\") " pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.452978 4727 generic.go:334] "Generic (PLEG): container finished" podID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" containerID="382c359e531d100c90d1519f7d51016503308a8283ce725ea287a7bde57ae21c" exitCode=0 Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.453115 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfkcf" event={"ID":"98bd9482-a59d-4e44-ba30-6a0277bcb2ae","Type":"ContainerDied","Data":"382c359e531d100c90d1519f7d51016503308a8283ce725ea287a7bde57ae21c"} Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.467408 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thvgk" event={"ID":"517d22f8-c007-4428-82f1-1fe55445d509","Type":"ContainerStarted","Data":"4da39b19c6fcf3cd388a534956b70c4187f8e3f8bd0f237b77d2b8a791dc123c"} Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.467478 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thvgk" event={"ID":"517d22f8-c007-4428-82f1-1fe55445d509","Type":"ContainerStarted","Data":"b68889f4ad8cea047ad882f1fc4477fb9e6ad8e3a7b973f1c9133253ba6e8da4"} Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.482632 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-89q8l" event={"ID":"cb1bb43a-d94a-47fb-b22d-060818c735aa","Type":"ContainerStarted","Data":"bfd0ceb2a5bad6efcc93b4dfa1d1dba7311983799c8bfb29cb9171abc647e720"} Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.486129 4727 generic.go:334] "Generic (PLEG): 
container finished" podID="2484515c-1846-4e63-9747-bc6dc81a574c" containerID="4caecf48c0a7a1a8cf7dfeaa150d06e438708679329263aaf510b2ca053a510e" exitCode=0 Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.486214 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5v95" event={"ID":"2484515c-1846-4e63-9747-bc6dc81a574c","Type":"ContainerDied","Data":"4caecf48c0a7a1a8cf7dfeaa150d06e438708679329263aaf510b2ca053a510e"} Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.492626 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8c613cd0-bcd2-4720-82bd-ae1fd3525bc8","Type":"ContainerStarted","Data":"e7395a5b49d9aee8c5bf467a7788c388b8a963f848f101b2805555907316fc4a"} Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.537448 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmqgx\" (UniqueName: \"kubernetes.io/projected/7f245f78-d777-49e5-8bf1-69a6bb04943b-kube-api-access-pmqgx\") pod \"redhat-operators-qn5hx\" (UID: \"7f245f78-d777-49e5-8bf1-69a6bb04943b\") " pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.537531 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f245f78-d777-49e5-8bf1-69a6bb04943b-catalog-content\") pod \"redhat-operators-qn5hx\" (UID: \"7f245f78-d777-49e5-8bf1-69a6bb04943b\") " pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.537593 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f245f78-d777-49e5-8bf1-69a6bb04943b-utilities\") pod \"redhat-operators-qn5hx\" (UID: \"7f245f78-d777-49e5-8bf1-69a6bb04943b\") " pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.538207 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f245f78-d777-49e5-8bf1-69a6bb04943b-utilities\") pod \"redhat-operators-qn5hx\" (UID: \"7f245f78-d777-49e5-8bf1-69a6bb04943b\") " pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.538585 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f245f78-d777-49e5-8bf1-69a6bb04943b-catalog-content\") pod \"redhat-operators-qn5hx\" (UID: \"7f245f78-d777-49e5-8bf1-69a6bb04943b\") " pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.568598 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmqgx\" (UniqueName: \"kubernetes.io/projected/7f245f78-d777-49e5-8bf1-69a6bb04943b-kube-api-access-pmqgx\") pod \"redhat-operators-qn5hx\" (UID: \"7f245f78-d777-49e5-8bf1-69a6bb04943b\") " pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.678764 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.726242 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.726488 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.726590 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.740046 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-cf2gn" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.756607 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4sqds"] Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.758134 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.783513 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4sqds"] Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.847863 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea7b38c-5f36-498f-93c4-23e849473cb4-utilities\") pod \"redhat-operators-4sqds\" (UID: \"dea7b38c-5f36-498f-93c4-23e849473cb4\") " pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.848014 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea7b38c-5f36-498f-93c4-23e849473cb4-catalog-content\") pod \"redhat-operators-4sqds\" (UID: \"dea7b38c-5f36-498f-93c4-23e849473cb4\") " pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.848098 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzhbg\" (UniqueName: \"kubernetes.io/projected/dea7b38c-5f36-498f-93c4-23e849473cb4-kube-api-access-nzhbg\") pod \"redhat-operators-4sqds\" (UID: \"dea7b38c-5f36-498f-93c4-23e849473cb4\") " pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.928137 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.951343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea7b38c-5f36-498f-93c4-23e849473cb4-catalog-content\") pod \"redhat-operators-4sqds\" (UID: \"dea7b38c-5f36-498f-93c4-23e849473cb4\") " pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:34:37 
crc kubenswrapper[4727]: I1210 14:34:37.951436 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzhbg\" (UniqueName: \"kubernetes.io/projected/dea7b38c-5f36-498f-93c4-23e849473cb4-kube-api-access-nzhbg\") pod \"redhat-operators-4sqds\" (UID: \"dea7b38c-5f36-498f-93c4-23e849473cb4\") " pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.951490 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea7b38c-5f36-498f-93c4-23e849473cb4-utilities\") pod \"redhat-operators-4sqds\" (UID: \"dea7b38c-5f36-498f-93c4-23e849473cb4\") " pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.952118 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea7b38c-5f36-498f-93c4-23e849473cb4-utilities\") pod \"redhat-operators-4sqds\" (UID: \"dea7b38c-5f36-498f-93c4-23e849473cb4\") " pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.952186 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea7b38c-5f36-498f-93c4-23e849473cb4-catalog-content\") pod \"redhat-operators-4sqds\" (UID: \"dea7b38c-5f36-498f-93c4-23e849473cb4\") " pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.961641 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n87wt"] Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.979022 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dgbsr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:37 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Dec 10 14:34:37 crc kubenswrapper[4727]: [+]process-running ok Dec 10 14:34:37 crc kubenswrapper[4727]: healthz check failed Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.979100 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dgbsr" podUID="e0975aba-5e6f-47df-9c61-a5b9a447dcc8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:37 crc kubenswrapper[4727]: I1210 14:34:37.985961 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzhbg\" (UniqueName: \"kubernetes.io/projected/dea7b38c-5f36-498f-93c4-23e849473cb4-kube-api-access-nzhbg\") pod \"redhat-operators-4sqds\" (UID: \"dea7b38c-5f36-498f-93c4-23e849473cb4\") " pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.118437 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.145153 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.169341 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f8rtv" Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.254453 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qn5hx"] Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.478299 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4sqds"] Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.531374 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4sqds" event={"ID":"dea7b38c-5f36-498f-93c4-23e849473cb4","Type":"ContainerStarted","Data":"b22f00ea38501b9ae1a14b5a74195a10dfb9e291ba587c71cb9ac1ae8228e8e5"} Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.532000 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-r4rds" Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.540532 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-89q8l" event={"ID":"cb1bb43a-d94a-47fb-b22d-060818c735aa","Type":"ContainerStarted","Data":"1a43cea41f860fc33021b692ad5d83663466883036748aeafc92affc28f4226b"} Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.551204 4727 generic.go:334] "Generic (PLEG): container finished" podID="517d22f8-c007-4428-82f1-1fe55445d509" containerID="4da39b19c6fcf3cd388a534956b70c4187f8e3f8bd0f237b77d2b8a791dc123c" exitCode=0 Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.551331 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thvgk" event={"ID":"517d22f8-c007-4428-82f1-1fe55445d509","Type":"ContainerDied","Data":"4da39b19c6fcf3cd388a534956b70c4187f8e3f8bd0f237b77d2b8a791dc123c"} Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.554141 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" event={"ID":"2eeb0c6b-fae8-47f7-91d4-42af15045dfe","Type":"ContainerStarted","Data":"edf26df08a80e4e1e4478f8ba9c614ecdca2e58249dfb3ae4d6bb421eed1eed3"} Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.574856 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.575558 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qn5hx" event={"ID":"7f245f78-d777-49e5-8bf1-69a6bb04943b","Type":"ContainerStarted","Data":"1ba0fc1a0906cc318120975e44091d59168007c5b896f0cf99ac693cafa20d15"} Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.576198 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8c613cd0-bcd2-4720-82bd-ae1fd3525bc8","Type":"ContainerStarted","Data":"e6f9ff030eab48a418511b12d1f34449c0ff846d2e0585f9b770c96cb6e7b8c4"} Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.579119 4727 generic.go:334] "Generic (PLEG): container 
finished" podID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" containerID="ea284af1f5eefc5417d9b04f7c49d339aa5d0f7a3e385e82ce35340bd9f18e33" exitCode=0 Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.580596 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns2g4" event={"ID":"4da708e0-26ae-4bf4-ab5c-ca793fc6e207","Type":"ContainerDied","Data":"ea284af1f5eefc5417d9b04f7c49d339aa5d0f7a3e385e82ce35340bd9f18e33"} Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.614310 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.614258375 podStartE2EDuration="3.614258375s" podCreationTimestamp="2025-12-10 14:34:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:38.609376204 +0000 UTC m=+182.804150746" watchObservedRunningTime="2025-12-10 14:34:38.614258375 +0000 UTC m=+182.809032917" Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.725250 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.726703 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.729389 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.729891 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.739248 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.903535 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0452f9d4-65b2-471a-988b-1dd693a3a691-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0452f9d4-65b2-471a-988b-1dd693a3a691\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.903653 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0452f9d4-65b2-471a-988b-1dd693a3a691-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0452f9d4-65b2-471a-988b-1dd693a3a691\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.932233 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dgbsr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:38 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Dec 10 14:34:38 crc kubenswrapper[4727]: [+]process-running ok Dec 10 14:34:38 crc kubenswrapper[4727]: healthz check failed Dec 10 14:34:38 crc kubenswrapper[4727]: I1210 14:34:38.932335 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dgbsr" podUID="e0975aba-5e6f-47df-9c61-a5b9a447dcc8" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Dec 10 14:34:39 crc kubenswrapper[4727]: I1210 14:34:39.005465 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0452f9d4-65b2-471a-988b-1dd693a3a691-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0452f9d4-65b2-471a-988b-1dd693a3a691\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:39 crc kubenswrapper[4727]: I1210 14:34:39.005593 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0452f9d4-65b2-471a-988b-1dd693a3a691-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0452f9d4-65b2-471a-988b-1dd693a3a691\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:39 crc kubenswrapper[4727]: I1210 14:34:39.005853 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0452f9d4-65b2-471a-988b-1dd693a3a691-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0452f9d4-65b2-471a-988b-1dd693a3a691\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:39 crc kubenswrapper[4727]: I1210 14:34:39.026521 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0452f9d4-65b2-471a-988b-1dd693a3a691-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0452f9d4-65b2-471a-988b-1dd693a3a691\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:39 crc kubenswrapper[4727]: I1210 14:34:39.044859 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:39 crc kubenswrapper[4727]: I1210 14:34:39.283522 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 10 14:34:39 crc kubenswrapper[4727]: I1210 14:34:39.594391 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0452f9d4-65b2-471a-988b-1dd693a3a691","Type":"ContainerStarted","Data":"37812be434b39bc0c983b0366a6ac111093778ee6d041383e88a48b27a098e43"} Dec 10 14:34:39 crc kubenswrapper[4727]: I1210 14:34:39.596885 4727 generic.go:334] "Generic (PLEG): container finished" podID="8c613cd0-bcd2-4720-82bd-ae1fd3525bc8" containerID="e6f9ff030eab48a418511b12d1f34449c0ff846d2e0585f9b770c96cb6e7b8c4" exitCode=0 Dec 10 14:34:39 crc kubenswrapper[4727]: I1210 14:34:39.597043 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8c613cd0-bcd2-4720-82bd-ae1fd3525bc8","Type":"ContainerDied","Data":"e6f9ff030eab48a418511b12d1f34449c0ff846d2e0585f9b770c96cb6e7b8c4"} Dec 10 14:34:39 crc kubenswrapper[4727]: I1210 14:34:39.626388 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-89q8l" podStartSLOduration=18.626360244 podStartE2EDuration="18.626360244s" podCreationTimestamp="2025-12-10 14:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:39.623215706 +0000 UTC m=+183.817990248" watchObservedRunningTime="2025-12-10 14:34:39.626360244 +0000 UTC m=+183.821134786" Dec 10 14:34:39 crc kubenswrapper[4727]: I1210 14:34:39.934896 4727 patch_prober.go:28] interesting 
pod/router-default-5444994796-dgbsr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:39 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Dec 10 14:34:39 crc kubenswrapper[4727]: [+]process-running ok Dec 10 14:34:39 crc kubenswrapper[4727]: healthz check failed Dec 10 14:34:39 crc kubenswrapper[4727]: I1210 14:34:39.935349 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dgbsr" podUID="e0975aba-5e6f-47df-9c61-a5b9a447dcc8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:40 crc kubenswrapper[4727]: I1210 14:34:40.611585 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-nfknc_198563d9-9967-47b7-aa02-c2b5be2d7c4b/cluster-samples-operator/0.log" Dec 10 14:34:40 crc kubenswrapper[4727]: I1210 14:34:40.611942 4727 generic.go:334] "Generic (PLEG): container finished" podID="198563d9-9967-47b7-aa02-c2b5be2d7c4b" containerID="12be6b07cdaa3b4b19211cc25a91aa7440584f8c635708efc8119900386dc763" exitCode=2 Dec 10 14:34:40 crc kubenswrapper[4727]: I1210 14:34:40.612045 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc" event={"ID":"198563d9-9967-47b7-aa02-c2b5be2d7c4b","Type":"ContainerDied","Data":"12be6b07cdaa3b4b19211cc25a91aa7440584f8c635708efc8119900386dc763"} Dec 10 14:34:40 crc kubenswrapper[4727]: I1210 14:34:40.612899 4727 scope.go:117] "RemoveContainer" containerID="12be6b07cdaa3b4b19211cc25a91aa7440584f8c635708efc8119900386dc763" Dec 10 14:34:40 crc kubenswrapper[4727]: I1210 14:34:40.619932 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0452f9d4-65b2-471a-988b-1dd693a3a691","Type":"ContainerStarted","Data":"e68961145440be2fa3fc063be6474160794a7e34ab83415ecd2ec65743cdb55d"} Dec 10 14:34:40 crc kubenswrapper[4727]: I1210 14:34:40.623030 4727 generic.go:334] "Generic (PLEG): container finished" podID="7f245f78-d777-49e5-8bf1-69a6bb04943b" containerID="631422ae118bd8d49ac1116a2d6765d6f2a67870f791ba62045401382771da11" exitCode=0 Dec 10 14:34:40 crc kubenswrapper[4727]: I1210 14:34:40.623136 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qn5hx" event={"ID":"7f245f78-d777-49e5-8bf1-69a6bb04943b","Type":"ContainerDied","Data":"631422ae118bd8d49ac1116a2d6765d6f2a67870f791ba62045401382771da11"} Dec 10 14:34:40 crc kubenswrapper[4727]: I1210 14:34:40.625057 4727 generic.go:334] "Generic (PLEG): container finished" podID="dea7b38c-5f36-498f-93c4-23e849473cb4" containerID="a9bbe55ee0df656f51413b43f71e7bab66683dcca84d2d7a3841515adbccee0d" exitCode=0 Dec 10 14:34:40 crc kubenswrapper[4727]: I1210 14:34:40.625141 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4sqds" event={"ID":"dea7b38c-5f36-498f-93c4-23e849473cb4","Type":"ContainerDied","Data":"a9bbe55ee0df656f51413b43f71e7bab66683dcca84d2d7a3841515adbccee0d"} Dec 10 14:34:40 crc kubenswrapper[4727]: I1210 14:34:40.626571 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" 
event={"ID":"2eeb0c6b-fae8-47f7-91d4-42af15045dfe","Type":"ContainerStarted","Data":"22753eb65466a644118b319b4bb0574cedc21cacb390a72561e89e5922a187a2"} Dec 10 14:34:40 crc kubenswrapper[4727]: I1210 14:34:40.627021 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:34:40 crc kubenswrapper[4727]: I1210 14:34:40.869052 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:40 crc kubenswrapper[4727]: I1210 14:34:40.886511 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" podStartSLOduration=158.886487883 podStartE2EDuration="2m38.886487883s" podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:40.659435592 +0000 UTC m=+184.854210134" watchObservedRunningTime="2025-12-10 14:34:40.886487883 +0000 UTC m=+185.081262425" Dec 10 14:34:40 crc kubenswrapper[4727]: I1210 14:34:40.945151 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:40 crc kubenswrapper[4727]: I1210 14:34:40.950379 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dgbsr" Dec 10 14:34:41 crc kubenswrapper[4727]: I1210 14:34:41.047111 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c613cd0-bcd2-4720-82bd-ae1fd3525bc8-kube-api-access\") pod \"8c613cd0-bcd2-4720-82bd-ae1fd3525bc8\" (UID: \"8c613cd0-bcd2-4720-82bd-ae1fd3525bc8\") " Dec 10 14:34:41 crc kubenswrapper[4727]: I1210 14:34:41.047175 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c613cd0-bcd2-4720-82bd-ae1fd3525bc8-kubelet-dir\") pod \"8c613cd0-bcd2-4720-82bd-ae1fd3525bc8\" (UID: \"8c613cd0-bcd2-4720-82bd-ae1fd3525bc8\") " Dec 10 14:34:41 crc kubenswrapper[4727]: I1210 14:34:41.047361 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c613cd0-bcd2-4720-82bd-ae1fd3525bc8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8c613cd0-bcd2-4720-82bd-ae1fd3525bc8" (UID: "8c613cd0-bcd2-4720-82bd-ae1fd3525bc8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:34:41 crc kubenswrapper[4727]: I1210 14:34:41.048027 4727 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c613cd0-bcd2-4720-82bd-ae1fd3525bc8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:34:41 crc kubenswrapper[4727]: I1210 14:34:41.057220 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c613cd0-bcd2-4720-82bd-ae1fd3525bc8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8c613cd0-bcd2-4720-82bd-ae1fd3525bc8" (UID: "8c613cd0-bcd2-4720-82bd-ae1fd3525bc8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:34:41 crc kubenswrapper[4727]: I1210 14:34:41.149329 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c613cd0-bcd2-4720-82bd-ae1fd3525bc8-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 14:34:41 crc kubenswrapper[4727]: I1210 14:34:41.637254 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:41 crc kubenswrapper[4727]: I1210 14:34:41.637258 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8c613cd0-bcd2-4720-82bd-ae1fd3525bc8","Type":"ContainerDied","Data":"e7395a5b49d9aee8c5bf467a7788c388b8a963f848f101b2805555907316fc4a"} Dec 10 14:34:41 crc kubenswrapper[4727]: I1210 14:34:41.637363 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7395a5b49d9aee8c5bf467a7788c388b8a963f848f101b2805555907316fc4a" Dec 10 14:34:42 crc kubenswrapper[4727]: I1210 14:34:42.648482 4727 generic.go:334] "Generic (PLEG): container finished" podID="0452f9d4-65b2-471a-988b-1dd693a3a691" containerID="e68961145440be2fa3fc063be6474160794a7e34ab83415ecd2ec65743cdb55d" exitCode=0 Dec 10 14:34:42 crc kubenswrapper[4727]: I1210 14:34:42.648576 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0452f9d4-65b2-471a-988b-1dd693a3a691","Type":"ContainerDied","Data":"e68961145440be2fa3fc063be6474160794a7e34ab83415ecd2ec65743cdb55d"} Dec 10 14:34:42 crc kubenswrapper[4727]: I1210 14:34:42.652898 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-nfknc_198563d9-9967-47b7-aa02-c2b5be2d7c4b/cluster-samples-operator/0.log" Dec 10 14:34:42 crc kubenswrapper[4727]: I1210 14:34:42.653195 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nfknc" event={"ID":"198563d9-9967-47b7-aa02-c2b5be2d7c4b","Type":"ContainerStarted","Data":"c8b9569d8fe31c7c4b8e1d9dac795149f8645820d5b3ee46125772bbeba0eaa4"} Dec 10 14:34:43 crc kubenswrapper[4727]: I1210 14:34:43.177418 4727 patch_prober.go:28] interesting pod/console-f9d7485db-swj5s container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 10 14:34:43 crc kubenswrapper[4727]: I1210 14:34:43.177504 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-swj5s" podUID="6d8cde10-5565-4980-a4e2-a30f26707a0e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 10 14:34:43 crc kubenswrapper[4727]: I1210 14:34:43.674994 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:34:43 crc kubenswrapper[4727]: I1210 14:34:43.912306 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:34:43 crc kubenswrapper[4727]: I1210 
14:34:43.912366 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:34:43 crc kubenswrapper[4727]: I1210 14:34:43.912364 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 14:34:43 crc kubenswrapper[4727]: I1210 14:34:43.912431 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 14:34:44 crc kubenswrapper[4727]: I1210 14:34:44.005531 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:44 crc kubenswrapper[4727]: I1210 14:34:44.095743 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0452f9d4-65b2-471a-988b-1dd693a3a691-kubelet-dir\") pod \"0452f9d4-65b2-471a-988b-1dd693a3a691\" (UID: \"0452f9d4-65b2-471a-988b-1dd693a3a691\") " Dec 10 14:34:44 crc kubenswrapper[4727]: I1210 14:34:44.095848 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0452f9d4-65b2-471a-988b-1dd693a3a691-kube-api-access\") pod \"0452f9d4-65b2-471a-988b-1dd693a3a691\" (UID: \"0452f9d4-65b2-471a-988b-1dd693a3a691\") " Dec 10 14:34:44 crc kubenswrapper[4727]: I1210 14:34:44.096024 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0452f9d4-65b2-471a-988b-1dd693a3a691-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0452f9d4-65b2-471a-988b-1dd693a3a691" (UID: "0452f9d4-65b2-471a-988b-1dd693a3a691"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:34:44 crc kubenswrapper[4727]: I1210 14:34:44.096367 4727 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0452f9d4-65b2-471a-988b-1dd693a3a691-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:34:44 crc kubenswrapper[4727]: I1210 14:34:44.104165 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0452f9d4-65b2-471a-988b-1dd693a3a691-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0452f9d4-65b2-471a-988b-1dd693a3a691" (UID: "0452f9d4-65b2-471a-988b-1dd693a3a691"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:34:44 crc kubenswrapper[4727]: I1210 14:34:44.197677 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0452f9d4-65b2-471a-988b-1dd693a3a691-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 14:34:44 crc kubenswrapper[4727]: I1210 14:34:44.674867 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0452f9d4-65b2-471a-988b-1dd693a3a691","Type":"ContainerDied","Data":"37812be434b39bc0c983b0366a6ac111093778ee6d041383e88a48b27a098e43"} Dec 10 14:34:44 crc kubenswrapper[4727]: I1210 14:34:44.674944 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37812be434b39bc0c983b0366a6ac111093778ee6d041383e88a48b27a098e43" Dec 10 14:34:44 crc kubenswrapper[4727]: I1210 14:34:44.675030 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:53 crc kubenswrapper[4727]: I1210 14:34:53.180894 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:53 crc kubenswrapper[4727]: I1210 14:34:53.188511 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:34:54 crc kubenswrapper[4727]: I1210 14:34:54.043057 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:34:54 crc kubenswrapper[4727]: I1210 14:34:54.043073 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:34:54 crc kubenswrapper[4727]: I1210 14:34:54.043174 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 14:34:54 crc kubenswrapper[4727]: I1210 14:34:54.043211 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 14:34:54 crc kubenswrapper[4727]: I1210 14:34:54.043266 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-mnprj" Dec 10 14:34:54 crc kubenswrapper[4727]: I1210 14:34:54.043816 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:34:54 crc kubenswrapper[4727]: I1210 14:34:54.043871 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mnprj" 
podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 14:34:54 crc kubenswrapper[4727]: I1210 14:34:54.044323 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"38b5fb2b9d661e9cea02fb04e344ee336818a6a3d1b29f4a84a1a070d9fc1d98"} pod="openshift-console/downloads-7954f5f757-mnprj" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 10 14:34:54 crc kubenswrapper[4727]: I1210 14:34:54.044582 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" containerID="cri-o://38b5fb2b9d661e9cea02fb04e344ee336818a6a3d1b29f4a84a1a070d9fc1d98" gracePeriod=2 Dec 10 14:34:55 crc kubenswrapper[4727]: I1210 14:34:55.770642 4727 generic.go:334] "Generic (PLEG): container finished" podID="d14ab0aa-f244-4668-911a-3a54806f024f" containerID="38b5fb2b9d661e9cea02fb04e344ee336818a6a3d1b29f4a84a1a070d9fc1d98" exitCode=0 Dec 10 14:34:55 crc kubenswrapper[4727]: I1210 14:34:55.770737 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mnprj" event={"ID":"d14ab0aa-f244-4668-911a-3a54806f024f","Type":"ContainerDied","Data":"38b5fb2b9d661e9cea02fb04e344ee336818a6a3d1b29f4a84a1a070d9fc1d98"} Dec 10 14:34:57 crc kubenswrapper[4727]: I1210 14:34:57.389639 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:35:03 crc kubenswrapper[4727]: I1210 14:35:03.910491 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:35:03 crc kubenswrapper[4727]: I1210 14:35:03.911027 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 14:35:03 crc kubenswrapper[4727]: I1210 14:35:03.939251 4727 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-nv97s container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 10 14:35:03 crc kubenswrapper[4727]: I1210 14:35:03.939333 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 10 14:35:04 crc kubenswrapper[4727]: I1210 14:35:04.025647 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-srt7t" Dec 10 14:35:07 crc kubenswrapper[4727]: I1210 
14:35:07.724472 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:35:07 crc kubenswrapper[4727]: I1210 14:35:07.724571 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.523386 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 10 14:35:10 crc kubenswrapper[4727]: E1210 14:35:10.523980 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0452f9d4-65b2-471a-988b-1dd693a3a691" containerName="pruner" Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.523995 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0452f9d4-65b2-471a-988b-1dd693a3a691" containerName="pruner" Dec 10 14:35:10 crc kubenswrapper[4727]: E1210 14:35:10.524008 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c613cd0-bcd2-4720-82bd-ae1fd3525bc8" containerName="pruner" Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.524014 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c613cd0-bcd2-4720-82bd-ae1fd3525bc8" containerName="pruner" Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.524132 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c613cd0-bcd2-4720-82bd-ae1fd3525bc8" containerName="pruner" Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.524142 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0452f9d4-65b2-471a-988b-1dd693a3a691" containerName="pruner" Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.524566 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.527080 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.527157 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.534617 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.577343 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.578018 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.680568 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.680685 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.681353 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.708210 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:35:10 crc kubenswrapper[4727]: I1210 14:35:10.842175 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:35:13 crc kubenswrapper[4727]: I1210 14:35:13.912410 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:35:13 crc kubenswrapper[4727]: I1210 14:35:13.912506 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 14:35:15 crc kubenswrapper[4727]: I1210 14:35:15.250443 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 10 14:35:15 crc kubenswrapper[4727]: I1210 14:35:15.251658 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:35:15 crc kubenswrapper[4727]: I1210 14:35:15.253694 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 10 14:35:15 crc kubenswrapper[4727]: I1210 14:35:15.330210 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da95bc09-e230-4593-bb24-4723a883571f-kube-api-access\") pod \"installer-9-crc\" (UID: \"da95bc09-e230-4593-bb24-4723a883571f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:35:15 crc kubenswrapper[4727]: I1210 14:35:15.330281 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da95bc09-e230-4593-bb24-4723a883571f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"da95bc09-e230-4593-bb24-4723a883571f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:35:15 crc kubenswrapper[4727]: I1210 14:35:15.330460 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/da95bc09-e230-4593-bb24-4723a883571f-var-lock\") pod \"installer-9-crc\" (UID: \"da95bc09-e230-4593-bb24-4723a883571f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:35:15 crc kubenswrapper[4727]: I1210 14:35:15.432349 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da95bc09-e230-4593-bb24-4723a883571f-kube-api-access\") pod \"installer-9-crc\" (UID: \"da95bc09-e230-4593-bb24-4723a883571f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:35:15 crc kubenswrapper[4727]: I1210 14:35:15.432401 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da95bc09-e230-4593-bb24-4723a883571f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"da95bc09-e230-4593-bb24-4723a883571f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:35:15 crc kubenswrapper[4727]: I1210 14:35:15.432501 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/da95bc09-e230-4593-bb24-4723a883571f-var-lock\") pod \"installer-9-crc\" (UID: \"da95bc09-e230-4593-bb24-4723a883571f\") " 
pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:35:15 crc kubenswrapper[4727]: I1210 14:35:15.432698 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da95bc09-e230-4593-bb24-4723a883571f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"da95bc09-e230-4593-bb24-4723a883571f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:35:15 crc kubenswrapper[4727]: I1210 14:35:15.433068 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/da95bc09-e230-4593-bb24-4723a883571f-var-lock\") pod \"installer-9-crc\" (UID: \"da95bc09-e230-4593-bb24-4723a883571f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:35:15 crc kubenswrapper[4727]: I1210 14:35:15.963590 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da95bc09-e230-4593-bb24-4723a883571f-kube-api-access\") pod \"installer-9-crc\" (UID: \"da95bc09-e230-4593-bb24-4723a883571f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:35:16 crc kubenswrapper[4727]: I1210 14:35:16.174182 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:35:23 crc kubenswrapper[4727]: I1210 14:35:23.912606 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:35:23 crc kubenswrapper[4727]: I1210 14:35:23.912966 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 14:35:33 crc kubenswrapper[4727]: I1210 14:35:33.910846 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:35:33 crc kubenswrapper[4727]: I1210 14:35:33.913276 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 14:35:37 crc kubenswrapper[4727]: I1210 14:35:37.725014 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:35:37 crc kubenswrapper[4727]: I1210 14:35:37.725533 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:35:37 crc kubenswrapper[4727]: I1210 14:35:37.725695 4727 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:35:37 crc kubenswrapper[4727]: I1210 14:35:37.726554 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 14:35:37 crc kubenswrapper[4727]: I1210 14:35:37.726624 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f" gracePeriod=600 Dec 10 14:35:42 crc kubenswrapper[4727]: I1210 14:35:42.345428 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f" exitCode=0 Dec 10 14:35:42 crc kubenswrapper[4727]: I1210 14:35:42.345518 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f"} Dec 10 14:35:43 crc kubenswrapper[4727]: I1210 14:35:43.911474 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:35:43 crc kubenswrapper[4727]: I1210 14:35:43.911571 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 14:35:46 crc kubenswrapper[4727]: E1210 14:35:46.382142 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 10 14:35:46 crc kubenswrapper[4727]: E1210 14:35:46.382519 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4vqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g7k77_openshift-marketplace(d3b0146e-cc5a-48ef-904a-b2d28a6720f3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:46 crc kubenswrapper[4727]: E1210 14:35:46.383751 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-g7k77" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" Dec 10 14:35:48 crc kubenswrapper[4727]: E1210 14:35:48.661855 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-g7k77" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" Dec 10 14:35:48 crc kubenswrapper[4727]: E1210 14:35:48.762203 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 10 14:35:48 crc kubenswrapper[4727]: E1210 14:35:48.762411 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c98zl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ns2g4_openshift-marketplace(4da708e0-26ae-4bf4-ab5c-ca793fc6e207): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:48 crc kubenswrapper[4727]: E1210 14:35:48.763662 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ns2g4" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" Dec 10 14:35:48 crc kubenswrapper[4727]: E1210 14:35:48.798943 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 10 14:35:48 crc kubenswrapper[4727]: E1210 14:35:48.799139 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fjdfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-thvgk_openshift-marketplace(517d22f8-c007-4428-82f1-1fe55445d509): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:48 crc kubenswrapper[4727]: E1210 14:35:48.800334 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-thvgk" podUID="517d22f8-c007-4428-82f1-1fe55445d509" Dec 10 14:35:52 crc kubenswrapper[4727]: E1210 14:35:52.247965 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-thvgk" podUID="517d22f8-c007-4428-82f1-1fe55445d509" Dec 10 14:35:52 crc kubenswrapper[4727]: E1210 14:35:52.247979 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ns2g4" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" Dec 10 14:35:52 crc kubenswrapper[4727]: E1210 14:35:52.314719 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 10 14:35:52 crc kubenswrapper[4727]: E1210 14:35:52.314978 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmqgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qn5hx_openshift-marketplace(7f245f78-d777-49e5-8bf1-69a6bb04943b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:52 crc kubenswrapper[4727]: E1210 14:35:52.316305 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qn5hx" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" Dec 10 14:35:52 crc kubenswrapper[4727]: E1210 14:35:52.401588 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 10 14:35:52 crc kubenswrapper[4727]: E1210 14:35:52.401886 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57x4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-s2hk4_openshift-marketplace(44e5c54b-8d4a-4435-bc9e-a93dc0b37e96): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:52 crc kubenswrapper[4727]: E1210 14:35:52.403181 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-s2hk4" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" Dec 10 14:35:53 crc kubenswrapper[4727]: E1210 14:35:53.719349 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qn5hx" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" Dec 10 14:35:53 crc kubenswrapper[4727]: E1210 14:35:53.719469 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-s2hk4" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" Dec 10 14:35:53 crc kubenswrapper[4727]: E1210 14:35:53.816028 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 10 14:35:53 crc kubenswrapper[4727]: E1210 14:35:53.816466 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzhbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4sqds_openshift-marketplace(dea7b38c-5f36-498f-93c4-23e849473cb4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:53 crc kubenswrapper[4727]: E1210 14:35:53.816028 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 10 14:35:53 crc kubenswrapper[4727]: E1210 14:35:53.816605 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tggrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-xfkcf_openshift-marketplace(98bd9482-a59d-4e44-ba30-6a0277bcb2ae): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:53 crc kubenswrapper[4727]: E1210 14:35:53.817749 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xfkcf" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" Dec 10 14:35:53 crc kubenswrapper[4727]: E1210 14:35:53.817844 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4sqds" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" Dec 10 14:35:53 crc kubenswrapper[4727]: E1210 14:35:53.839361 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 10 14:35:53 crc kubenswrapper[4727]: E1210 14:35:53.839830 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nc5m8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-p5v95_openshift-marketplace(2484515c-1846-4e63-9747-bc6dc81a574c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:53 crc kubenswrapper[4727]: E1210 14:35:53.841352 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled\"" pod="openshift-marketplace/community-operators-p5v95" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" Dec 10 14:35:53 crc kubenswrapper[4727]: I1210 14:35:53.911724 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:35:53 crc kubenswrapper[4727]: I1210 14:35:53.911799 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 14:35:53 crc kubenswrapper[4727]: I1210 14:35:53.985512 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 10 14:35:54 crc kubenswrapper[4727]: I1210 14:35:54.254215 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 10 14:35:54 crc kubenswrapper[4727]: W1210 14:35:54.268634 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podda95bc09_e230_4593_bb24_4723a883571f.slice/crio-f66642fb6a000fda3d49a269689564d0d8252f13a3b24c2cf6bd4f54b4a7676b WatchSource:0}: Error finding container f66642fb6a000fda3d49a269689564d0d8252f13a3b24c2cf6bd4f54b4a7676b: Status 404 returned error can't find the container with id f66642fb6a000fda3d49a269689564d0d8252f13a3b24c2cf6bd4f54b4a7676b Dec 10 14:35:54 crc kubenswrapper[4727]: I1210 14:35:54.515822 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mnprj" event={"ID":"d14ab0aa-f244-4668-911a-3a54806f024f","Type":"ContainerStarted","Data":"7079e02e4bd39fe3d0f4fe2c8861c709835b6ef1b6e45c1c3561114db6236230"} Dec 10 14:35:54 crc kubenswrapper[4727]: I1210 14:35:54.516604 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mnprj" Dec 10 14:35:54 crc kubenswrapper[4727]: I1210 14:35:54.518083 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:35:54 crc kubenswrapper[4727]: I1210 14:35:54.518149 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 10 14:35:54 crc kubenswrapper[4727]: I1210 14:35:54.519614 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"51efd2633926a32a5d87ebb689d788f19863dd8e68c34097ba24f8c6d596b5ea"} Dec 10 14:35:54 crc kubenswrapper[4727]: I1210 14:35:54.521212 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"da95bc09-e230-4593-bb24-4723a883571f","Type":"ContainerStarted","Data":"f66642fb6a000fda3d49a269689564d0d8252f13a3b24c2cf6bd4f54b4a7676b"} Dec 10 14:35:54 crc 
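
The entries above trace the kubelet's image-pull failure path end to end: the CRI PullImage call fails ("context canceled"), kuberuntime_manager dumps the whole extract-content init-container spec as an UnhandledError, pod_workers records "Error syncing pod" with ErrImagePull, and subsequent sync attempts short-circuit with ImagePullBackOff until the per-image backoff window expires. A rough sketch of that backoff follows (not kubelet source; the 10-second initial delay and 300-second cap are assumed defaults, stated here only for illustration):

package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous delay and clamps it to max,
// mirroring the per-image doubling backoff the kubelet applies
// between pull retries (initial/cap values are assumptions).
func nextDelay(prev, max time.Duration) time.Duration {
	if prev == 0 {
		return 10 * time.Second // assumed initial backoff
	}
	d := prev * 2
	if d > max {
		return max
	}
	return d
}

func main() {
	var d time.Duration
	for i := 1; i <= 7; i++ {
		d = nextDelay(d, 300*time.Second) // assumed 5-minute cap
		fmt.Printf("pull attempt %d backs off %v\n", i, d)
	}
}

Once a pull finally succeeds the per-image entry resets, which matches the extract-content containers starting without further backoff from 14:36:04 onward below.
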
kubenswrapper[4727]: I1210 14:35:54.523344 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1","Type":"ContainerStarted","Data":"72cda0cbfe8b4a056878b5c75a9e4cd7fce12c501a6963fc3738b4b8b3e4fbe1"} Dec 10 14:35:54 crc kubenswrapper[4727]: I1210 14:35:54.523391 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1","Type":"ContainerStarted","Data":"d0a76aace010f3bb3460e973f7f0e682f9261182b58eff59bd68d5a0b9d9048b"} Dec 10 14:35:54 crc kubenswrapper[4727]: E1210 14:35:54.525728 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-p5v95" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" Dec 10 14:35:54 crc kubenswrapper[4727]: E1210 14:35:54.525761 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xfkcf" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" Dec 10 14:35:54 crc kubenswrapper[4727]: I1210 14:35:54.555190 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=44.555149233 podStartE2EDuration="44.555149233s" podCreationTimestamp="2025-12-10 14:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:35:54.554197997 +0000 UTC m=+258.748972539" watchObservedRunningTime="2025-12-10 14:35:54.555149233 +0000 UTC m=+258.749923775" Dec 10 14:35:55 crc kubenswrapper[4727]: I1210 14:35:55.530802 4727 generic.go:334] "Generic (PLEG): container finished" podID="1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1" containerID="72cda0cbfe8b4a056878b5c75a9e4cd7fce12c501a6963fc3738b4b8b3e4fbe1" exitCode=0 Dec 10 14:35:55 crc kubenswrapper[4727]: I1210 14:35:55.531627 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1","Type":"ContainerDied","Data":"72cda0cbfe8b4a056878b5c75a9e4cd7fce12c501a6963fc3738b4b8b3e4fbe1"} Dec 10 14:35:55 crc kubenswrapper[4727]: I1210 14:35:55.536129 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"da95bc09-e230-4593-bb24-4723a883571f","Type":"ContainerStarted","Data":"2b3a0b32b230eca1672c632fb32332a87a2ab8a0e92d2a256d22a96013929f33"} Dec 10 14:35:55 crc kubenswrapper[4727]: I1210 14:35:55.536681 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-mnprj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 10 14:35:55 crc kubenswrapper[4727]: I1210 14:35:55.536724 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mnprj" podUID="d14ab0aa-f244-4668-911a-3a54806f024f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: 
connect: connection refused" Dec 10 14:35:55 crc kubenswrapper[4727]: I1210 14:35:55.572357 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=40.572334519 podStartE2EDuration="40.572334519s" podCreationTimestamp="2025-12-10 14:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:35:55.571695692 +0000 UTC m=+259.766470234" watchObservedRunningTime="2025-12-10 14:35:55.572334519 +0000 UTC m=+259.767109061" Dec 10 14:35:56 crc kubenswrapper[4727]: I1210 14:35:56.807699 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:35:56 crc kubenswrapper[4727]: I1210 14:35:56.825283 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1-kube-api-access\") pod \"1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1\" (UID: \"1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1\") " Dec 10 14:35:56 crc kubenswrapper[4727]: I1210 14:35:56.825409 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1-kubelet-dir\") pod \"1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1\" (UID: \"1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1\") " Dec 10 14:35:56 crc kubenswrapper[4727]: I1210 14:35:56.825764 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1" (UID: "1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:35:56 crc kubenswrapper[4727]: I1210 14:35:56.835276 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1" (UID: "1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:35:56 crc kubenswrapper[4727]: I1210 14:35:56.926208 4727 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:35:56 crc kubenswrapper[4727]: I1210 14:35:56.926238 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 14:35:57 crc kubenswrapper[4727]: I1210 14:35:57.553745 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1","Type":"ContainerDied","Data":"d0a76aace010f3bb3460e973f7f0e682f9261182b58eff59bd68d5a0b9d9048b"} Dec 10 14:35:57 crc kubenswrapper[4727]: I1210 14:35:57.554226 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0a76aace010f3bb3460e973f7f0e682f9261182b58eff59bd68d5a0b9d9048b" Dec 10 14:35:57 crc kubenswrapper[4727]: I1210 14:35:57.554368 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:36:02 crc kubenswrapper[4727]: I1210 14:36:02.874637 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nv97s"] Dec 10 14:36:04 crc kubenswrapper[4727]: I1210 14:36:04.000100 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-mnprj" Dec 10 14:36:04 crc kubenswrapper[4727]: I1210 14:36:04.628830 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7k77" event={"ID":"d3b0146e-cc5a-48ef-904a-b2d28a6720f3","Type":"ContainerStarted","Data":"742fe76861cc16f812ceb6c81196291cce987fcec2ca68f92298517e086c7e67"} Dec 10 14:36:07 crc kubenswrapper[4727]: I1210 14:36:07.742378 4727 generic.go:334] "Generic (PLEG): container finished" podID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" containerID="742fe76861cc16f812ceb6c81196291cce987fcec2ca68f92298517e086c7e67" exitCode=0 Dec 10 14:36:07 crc kubenswrapper[4727]: I1210 14:36:07.742515 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7k77" event={"ID":"d3b0146e-cc5a-48ef-904a-b2d28a6720f3","Type":"ContainerDied","Data":"742fe76861cc16f812ceb6c81196291cce987fcec2ca68f92298517e086c7e67"} Dec 10 14:36:07 crc kubenswrapper[4727]: I1210 14:36:07.745834 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thvgk" event={"ID":"517d22f8-c007-4428-82f1-1fe55445d509","Type":"ContainerStarted","Data":"b7472b253f28593222f69ba6e121df90ced7362caeae70ec916dea0c0dc9aa2a"} Dec 10 14:36:10 crc kubenswrapper[4727]: I1210 14:36:10.765725 4727 generic.go:334] "Generic (PLEG): container finished" podID="517d22f8-c007-4428-82f1-1fe55445d509" containerID="b7472b253f28593222f69ba6e121df90ced7362caeae70ec916dea0c0dc9aa2a" exitCode=0 Dec 10 14:36:10 crc kubenswrapper[4727]: I1210 14:36:10.765791 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thvgk" event={"ID":"517d22f8-c007-4428-82f1-1fe55445d509","Type":"ContainerDied","Data":"b7472b253f28593222f69ba6e121df90ced7362caeae70ec916dea0c0dc9aa2a"} Dec 10 14:36:19 crc kubenswrapper[4727]: I1210 
14:36:17.574279 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:36:19 crc kubenswrapper[4727]: I1210 14:36:17.574716 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:36:19 crc kubenswrapper[4727]: I1210 14:36:17.574758 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:36:19 crc kubenswrapper[4727]: I1210 14:36:17.574785 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:36:19 crc kubenswrapper[4727]: I1210 14:36:17.577250 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 10 14:36:19 crc kubenswrapper[4727]: I1210 14:36:17.577577 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 10 14:36:19 crc kubenswrapper[4727]: I1210 14:36:17.577896 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 10 14:36:19 crc kubenswrapper[4727]: I1210 14:36:17.587611 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 10 14:36:19 crc kubenswrapper[4727]: I1210 14:36:17.596976 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:36:19 crc kubenswrapper[4727]: I1210 14:36:17.602582 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:36:19 crc kubenswrapper[4727]: I1210 14:36:17.603253 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
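
The reconciler_common/operation_generator pairs here ("MountVolume started" followed by "MountVolume.SetUp succeeded", and earlier the inverse "UnmountVolume started" → "TearDown succeeded" → "Volume detached" for the finished revision-pruner pod) are the kubelet volume manager reconciling its desired state of world against the actual state of world. A heavily simplified model of that loop (names and shapes are mine, for illustration only, not kubelet code):

package main

import "fmt"

// reconcile mounts what is desired but not yet mounted, and unmounts
// what is mounted but no longer desired - the same shape as the
// kubelet volume manager's desired/actual state reconciliation.
func reconcile(desired, actual map[string]bool) {
	for vol := range desired {
		if !actual[vol] {
			fmt.Println("MountVolume started for", vol)
			actual[vol] = true
			fmt.Println("MountVolume.SetUp succeeded for", vol)
		}
	}
	for vol := range actual {
		if !desired[vol] {
			fmt.Println("UnmountVolume started for", vol)
			delete(actual, vol)
			fmt.Println("UnmountVolume.TearDown succeeded for", vol)
		}
	}
}

func main() {
	// a pod was deleted: its volume should be torn down
	actual := map[string]bool{"kube-api-access": true}
	// new pods were scheduled: their volumes should be set up
	desired := map[string]bool{"nginx-conf": true, "networking-console-plugin-cert": true}
	reconcile(desired, actual)
}

The real reconciler also waits on "VerifyControllerAttachedVolume" (visible further down for the kube-apiserver host-path volumes) before mounting.
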
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:36:19 crc kubenswrapper[4727]: I1210 14:36:17.895746 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:36:19 crc kubenswrapper[4727]: I1210 14:36:17.902375 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:36:19 crc kubenswrapper[4727]: I1210 14:36:17.958834 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:36:19 crc kubenswrapper[4727]: I1210 14:36:18.190191 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:36:30 crc kubenswrapper[4727]: I1210 14:36:27.938427 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" containerName="oauth-openshift" containerID="cri-o://c3b9b52e32c30652f33052438c98b4b800e09cd906e1cb1d8c9107bec99ab3d0" gracePeriod=15 Dec 10 14:36:31 crc kubenswrapper[4727]: I1210 14:36:31.129967 4727 generic.go:334] "Generic (PLEG): container finished" podID="aa4939cc-34b3-4562-9798-92d443fb76ca" containerID="c3b9b52e32c30652f33052438c98b4b800e09cd906e1cb1d8c9107bec99ab3d0" exitCode=0 Dec 10 14:36:31 crc kubenswrapper[4727]: I1210 14:36:31.130020 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" event={"ID":"aa4939cc-34b3-4562-9798-92d443fb76ca","Type":"ContainerDied","Data":"c3b9b52e32c30652f33052438c98b4b800e09cd906e1cb1d8c9107bec99ab3d0"} Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.439672 4727 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 14:36:32 crc kubenswrapper[4727]: E1210 14:36:32.440372 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1" containerName="pruner" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.440388 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1" containerName="pruner" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.440572 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb76cfb-9e4c-498e-86a6-2a7d08a3f4f1" containerName="pruner" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.441220 4727 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.441480 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c" 
gracePeriod=15 Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.441584 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9" gracePeriod=15 Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.441601 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e" gracePeriod=15 Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.441696 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a" gracePeriod=15 Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.441934 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058" gracePeriod=15 Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.441968 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.442222 4727 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 14:36:32 crc kubenswrapper[4727]: E1210 14:36:32.442580 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.442608 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 10 14:36:32 crc kubenswrapper[4727]: E1210 14:36:32.442624 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.442634 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 10 14:36:32 crc kubenswrapper[4727]: E1210 14:36:32.442674 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.442684 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 10 14:36:32 crc kubenswrapper[4727]: E1210 14:36:32.442695 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.442704 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:32 crc kubenswrapper[4727]: E1210 
14:36:32.442715 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.442724 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:32 crc kubenswrapper[4727]: E1210 14:36:32.442734 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.442741 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 10 14:36:32 crc kubenswrapper[4727]: E1210 14:36:32.442752 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.442760 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:32 crc kubenswrapper[4727]: E1210 14:36:32.442770 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.442778 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.442953 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.442970 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.442980 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.442989 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.443001 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.443011 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.443021 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.454613 4727 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.495360 4727 kubelet.go:2421] "SyncLoop ADD" source="api" 
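
This stretch is a static pod rollover. The kubelet's file source notices a changed manifest and emits "SyncLoop REMOVE"/"SyncLoop ADD" for openshift-kube-apiserver/kube-apiserver-crc, the old containers are killed with gracePeriod=15, cpu/memory-manager state for the old UID is dropped ("RemoveStaleState"), and status_manager reports the pod "was deleted and then recreated" with a new UID (f4b27818... → 71bb4a3a...). The UID changes because the kubelet derives static pod UIDs from a hash of the manifest, so any spec edit yields a new pod identity; a loose illustration (the exact inputs and hash function are an assumption here, not the kubelet's implementation):

package main

import (
	"crypto/md5"
	"fmt"
)

// staticPodUID derives a deterministic pseudo-UID from the manifest
// bytes plus the node name, so an edited manifest produces a new
// identity - the reason the log shows oldPodUID != podUID.
func staticPodUID(manifest []byte, nodeName string) string {
	sum := md5.Sum(append(manifest, nodeName...))
	return fmt.Sprintf("%x", sum)
}

func main() {
	oldManifest := []byte("kube-apiserver manifest, revision 8")
	newManifest := []byte("kube-apiserver manifest, revision 9")
	fmt.Println("old UID:", staticPodUID(oldManifest, "crc"))
	fmt.Println("new UID:", staticPodUID(newManifest, "crc"))
}
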
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.573766 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.573815 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.573834 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.573851 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.573875 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.573934 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.573969 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.574002 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.675767 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.675860 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.675895 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.675942 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.675970 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.676009 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.676042 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.676051 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.676091 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.676045 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.675983 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.676080 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.676049 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.676088 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.676303 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.676450 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.787705 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.937370 4727 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-nv97s container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Dec 10 14:36:32 crc kubenswrapper[4727]: I1210 14:36:32.937555 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Dec 10 14:36:32 crc kubenswrapper[4727]: E1210 14:36:32.938685 4727 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events/oauth-openshift-558db77b4-nv97s.187fe144e39e10ce\": dial tcp 38.102.83.180:6443: connect: connection refused" event=< Dec 10 14:36:32 crc kubenswrapper[4727]: &Event{ObjectMeta:{oauth-openshift-558db77b4-nv97s.187fe144e39e10ce openshift-authentication 27492 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-558db77b4-nv97s,UID:aa4939cc-34b3-4562-9798-92d443fb76ca,APIVersion:v1,ResourceVersion:27222,FieldPath:spec.containers{oauth-openshift},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.12:6443/healthz": dial tcp 10.217.0.12:6443: connect: connection refused Dec 10 14:36:32 crc kubenswrapper[4727]: body: Dec 10 14:36:32 crc kubenswrapper[4727]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 14:34:24 +0000 UTC,LastTimestamp:2025-12-10 14:36:32.937496227 +0000 UTC m=+297.132270799,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 10 14:36:32 crc kubenswrapper[4727]: > Dec 10 14:36:33 crc kubenswrapper[4727]: I1210 14:36:33.144692 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 10 14:36:33 crc kubenswrapper[4727]: I1210 14:36:33.146266 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 10 14:36:33 crc kubenswrapper[4727]: I1210 14:36:33.147277 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9" exitCode=0 Dec 10 14:36:33 crc kubenswrapper[4727]: I1210 14:36:33.147348 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e" exitCode=0 Dec 10 14:36:33 crc kubenswrapper[4727]: I1210 14:36:33.147370 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058" exitCode=2 Dec 10 14:36:33 crc kubenswrapper[4727]: I1210 14:36:33.147356 4727 scope.go:117] "RemoveContainer" 
containerID="d6150cb7533ec2b5f116d0f9b23f85501b52273208c74f43b8310b9b14e7770f" Dec 10 14:36:33 crc kubenswrapper[4727]: E1210 14:36:33.296579 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:36:33Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:36:33Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:36:33Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:36:33Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:33 crc kubenswrapper[4727]: E1210 14:36:33.297056 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:33 crc kubenswrapper[4727]: E1210 14:36:33.297291 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:33 crc kubenswrapper[4727]: E1210 14:36:33.297531 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:33 crc kubenswrapper[4727]: E1210 14:36:33.297707 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:33 crc kubenswrapper[4727]: E1210 14:36:33.297719 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:36:34 crc kubenswrapper[4727]: I1210 14:36:34.157945 4727 generic.go:334] "Generic (PLEG): container finished" podID="da95bc09-e230-4593-bb24-4723a883571f" containerID="2b3a0b32b230eca1672c632fb32332a87a2ab8a0e92d2a256d22a96013929f33" exitCode=0 Dec 10 14:36:34 crc kubenswrapper[4727]: I1210 14:36:34.158026 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"da95bc09-e230-4593-bb24-4723a883571f","Type":"ContainerDied","Data":"2b3a0b32b230eca1672c632fb32332a87a2ab8a0e92d2a256d22a96013929f33"} Dec 10 14:36:34 crc kubenswrapper[4727]: I1210 14:36:34.159115 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:34 crc kubenswrapper[4727]: I1210 14:36:34.160092 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:34 crc kubenswrapper[4727]: I1210 14:36:34.163924 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 10 14:36:34 crc kubenswrapper[4727]: I1210 14:36:34.164964 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a" exitCode=0 Dec 10 14:36:36 crc kubenswrapper[4727]: I1210 14:36:36.299374 4727 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 10 14:36:36 crc kubenswrapper[4727]: I1210 14:36:36.568393 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:36 crc kubenswrapper[4727]: I1210 14:36:36.569399 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:36 crc kubenswrapper[4727]: E1210 14:36:36.606805 4727 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:36 crc kubenswrapper[4727]: E1210 14:36:36.607049 4727 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:36 crc kubenswrapper[4727]: E1210 14:36:36.607225 4727 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:36 crc kubenswrapper[4727]: E1210 14:36:36.607573 4727 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:36 crc kubenswrapper[4727]: E1210 14:36:36.609044 4727 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:36 crc kubenswrapper[4727]: 
I1210 14:36:36.609118 4727 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 10 14:36:36 crc kubenswrapper[4727]: E1210 14:36:36.609553 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="200ms" Dec 10 14:36:36 crc kubenswrapper[4727]: E1210 14:36:36.810842 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="400ms" Dec 10 14:36:37 crc kubenswrapper[4727]: E1210 14:36:37.211991 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="800ms" Dec 10 14:36:38 crc kubenswrapper[4727]: E1210 14:36:38.012431 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="1.6s" Dec 10 14:36:39 crc kubenswrapper[4727]: E1210 14:36:39.613800 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="3.2s" Dec 10 14:36:40 crc kubenswrapper[4727]: E1210 14:36:40.318436 4727 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events/oauth-openshift-558db77b4-nv97s.187fe144e39e10ce\": dial tcp 38.102.83.180:6443: connect: connection refused" event=< Dec 10 14:36:40 crc kubenswrapper[4727]: &Event{ObjectMeta:{oauth-openshift-558db77b4-nv97s.187fe144e39e10ce openshift-authentication 27492 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-558db77b4-nv97s,UID:aa4939cc-34b3-4562-9798-92d443fb76ca,APIVersion:v1,ResourceVersion:27222,FieldPath:spec.containers{oauth-openshift},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.12:6443/healthz": dial tcp 10.217.0.12:6443: connect: connection refused Dec 10 14:36:40 crc kubenswrapper[4727]: body: Dec 10 14:36:40 crc kubenswrapper[4727]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 14:34:24 +0000 UTC,LastTimestamp:2025-12-10 14:36:32.937496227 +0000 UTC m=+297.132270799,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 10 14:36:40 crc kubenswrapper[4727]: > Dec 10 14:36:41 crc kubenswrapper[4727]: I1210 14:36:41.205681 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 10 14:36:41 crc kubenswrapper[4727]: I1210 14:36:41.206703 4727 generic.go:334] "Generic (PLEG): container finished" 
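The ProbeError event above was recorded because the oauth-openshift pod's readiness endpoint refused the connection. A minimal Go sketch of an HTTPS readiness check that fails the same way (illustrative only, not kubelet's actual prober; only the probed URL is taken from the event):

    // probe_sketch.go - single-shot HTTPS readiness check.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probe performs one GET against a health endpoint and returns an
    // error such as "connect: connection refused" when nothing listens.
    func probe(url string) error {
        client := &http.Client{
            Timeout: 1 * time.Second,
            // Probe targets usually present self-signed certificates,
            // so this sketch skips verification.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. dial tcp 10.217.0.12:6443: connect: connection refused
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("https://10.217.0.12:6443/healthz"); err != nil {
            fmt.Println("Readiness probe error:", err)
        }
    }
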
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c" exitCode=0 Dec 10 14:36:41 crc kubenswrapper[4727]: I1210 14:36:41.600445 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:36:41 crc kubenswrapper[4727]: I1210 14:36:41.601532 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:41 crc kubenswrapper[4727]: I1210 14:36:41.601991 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:41 crc kubenswrapper[4727]: I1210 14:36:41.708596 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/da95bc09-e230-4593-bb24-4723a883571f-var-lock\") pod \"da95bc09-e230-4593-bb24-4723a883571f\" (UID: \"da95bc09-e230-4593-bb24-4723a883571f\") " Dec 10 14:36:41 crc kubenswrapper[4727]: I1210 14:36:41.708707 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da95bc09-e230-4593-bb24-4723a883571f-kube-api-access\") pod \"da95bc09-e230-4593-bb24-4723a883571f\" (UID: \"da95bc09-e230-4593-bb24-4723a883571f\") " Dec 10 14:36:41 crc kubenswrapper[4727]: I1210 14:36:41.708749 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da95bc09-e230-4593-bb24-4723a883571f-kubelet-dir\") pod \"da95bc09-e230-4593-bb24-4723a883571f\" (UID: \"da95bc09-e230-4593-bb24-4723a883571f\") " Dec 10 14:36:41 crc kubenswrapper[4727]: I1210 14:36:41.708799 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da95bc09-e230-4593-bb24-4723a883571f-var-lock" (OuterVolumeSpecName: "var-lock") pod "da95bc09-e230-4593-bb24-4723a883571f" (UID: "da95bc09-e230-4593-bb24-4723a883571f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:36:41 crc kubenswrapper[4727]: I1210 14:36:41.709010 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da95bc09-e230-4593-bb24-4723a883571f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "da95bc09-e230-4593-bb24-4723a883571f" (UID: "da95bc09-e230-4593-bb24-4723a883571f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:36:41 crc kubenswrapper[4727]: I1210 14:36:41.709477 4727 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da95bc09-e230-4593-bb24-4723a883571f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:41 crc kubenswrapper[4727]: I1210 14:36:41.709517 4727 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/da95bc09-e230-4593-bb24-4723a883571f-var-lock\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:41 crc kubenswrapper[4727]: I1210 14:36:41.714603 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da95bc09-e230-4593-bb24-4723a883571f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "da95bc09-e230-4593-bb24-4723a883571f" (UID: "da95bc09-e230-4593-bb24-4723a883571f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:36:41 crc kubenswrapper[4727]: I1210 14:36:41.810545 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da95bc09-e230-4593-bb24-4723a883571f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.215987 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"da95bc09-e230-4593-bb24-4723a883571f","Type":"ContainerDied","Data":"f66642fb6a000fda3d49a269689564d0d8252f13a3b24c2cf6bd4f54b4a7676b"} Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.216057 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f66642fb6a000fda3d49a269689564d0d8252f13a3b24c2cf6bd4f54b4a7676b" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.216111 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.235555 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.236026 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.674412 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.675234 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.675873 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.676117 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.676273 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.676808 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.677385 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.677711 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.677972 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.678206 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:42 crc kubenswrapper[4727]: W1210 14:36:42.751820 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-e8c3fc98060d32b9f7025127b3f0af279c929ced4607d87e862daa5995d6e5d7 WatchSource:0}: Error finding container e8c3fc98060d32b9f7025127b3f0af279c929ced4607d87e862daa5995d6e5d7: Status 404 returned error can't find 
the container with id e8c3fc98060d32b9f7025127b3f0af279c929ced4607d87e862daa5995d6e5d7 Dec 10 14:36:42 crc kubenswrapper[4727]: E1210 14:36:42.815070 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="6.4s" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.823266 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-error\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.823774 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-service-ca\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.823801 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-session\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.823886 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-provider-selection\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824030 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-trusted-ca-bundle\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824060 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-cliconfig\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824092 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa4939cc-34b3-4562-9798-92d443fb76ca-audit-dir\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824116 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k27rf\" (UniqueName: \"kubernetes.io/projected/aa4939cc-34b3-4562-9798-92d443fb76ca-kube-api-access-k27rf\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") " Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824179 4727 
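The "Failed to ensure lease exists, will retry" intervals double from 200ms up to the 6.4s seen here, a capped exponential backoff. A minimal Go sketch of that retry pattern (illustrative only, not the actual kubelet lease controller; maxBackoff and tryEnsureLease are assumptions of the sketch):

    // backoff_sketch.go - doubling retry interval with a cap, in the
    // spirit of the lease-retry records above.
    package main

    import (
        "fmt"
        "time"
    )

    const maxBackoff = 7 * time.Second // assumed cap for the sketch

    // tryEnsureLease stands in for the GET/CREATE of the node Lease;
    // here it always fails, mimicking the connection-refused errors above.
    func tryEnsureLease() error {
        return fmt.Errorf("dial tcp 38.102.83.180:6443: connect: connection refused")
    }

    func main() {
        interval := 200 * time.Millisecond
        for attempt := 1; attempt <= 6; attempt++ {
            if err := tryEnsureLease(); err == nil {
                return // lease ensured, stop retrying
            } else {
                fmt.Printf("Failed to ensure lease exists, will retry: %v interval=%s\n", err, interval)
            }
            time.Sleep(interval)
            // Double the interval, but never exceed the cap.
            if interval *= 2; interval > maxBackoff {
                interval = maxBackoff
            }
        }
    }
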
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.823266 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-error\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.823774 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-service-ca\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.823801 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-session\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.823886 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-provider-selection\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824030 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-trusted-ca-bundle\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824060 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-cliconfig\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824092 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa4939cc-34b3-4562-9798-92d443fb76ca-audit-dir\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824116 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k27rf\" (UniqueName: \"kubernetes.io/projected/aa4939cc-34b3-4562-9798-92d443fb76ca-kube-api-access-k27rf\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824179 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-idp-0-file-data\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824219 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-ocp-branding-template\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824246 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824293 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-audit-policies\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824340 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824370 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824420 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-router-certs\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824466 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-serving-cert\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.824496 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-login\") pod \"aa4939cc-34b3-4562-9798-92d443fb76ca\" (UID: \"aa4939cc-34b3-4562-9798-92d443fb76ca\") "
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.827210 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "aa4939cc-34b3-4562-9798-92d443fb76ca" (UID: "aa4939cc-34b3-4562-9798-92d443fb76ca"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.828632 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4939cc-34b3-4562-9798-92d443fb76ca-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "aa4939cc-34b3-4562-9798-92d443fb76ca" (UID: "aa4939cc-34b3-4562-9798-92d443fb76ca"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.829349 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "aa4939cc-34b3-4562-9798-92d443fb76ca" (UID: "aa4939cc-34b3-4562-9798-92d443fb76ca"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.829429 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.829454 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.829471 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.829510 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "aa4939cc-34b3-4562-9798-92d443fb76ca" (UID: "aa4939cc-34b3-4562-9798-92d443fb76ca"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.830205 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "aa4939cc-34b3-4562-9798-92d443fb76ca" (UID: "aa4939cc-34b3-4562-9798-92d443fb76ca"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.845081 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "aa4939cc-34b3-4562-9798-92d443fb76ca" (UID: "aa4939cc-34b3-4562-9798-92d443fb76ca"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.846890 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "aa4939cc-34b3-4562-9798-92d443fb76ca" (UID: "aa4939cc-34b3-4562-9798-92d443fb76ca"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.847172 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "aa4939cc-34b3-4562-9798-92d443fb76ca" (UID: "aa4939cc-34b3-4562-9798-92d443fb76ca"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.848341 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "aa4939cc-34b3-4562-9798-92d443fb76ca" (UID: "aa4939cc-34b3-4562-9798-92d443fb76ca"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.850477 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "aa4939cc-34b3-4562-9798-92d443fb76ca" (UID: "aa4939cc-34b3-4562-9798-92d443fb76ca"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.851212 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa4939cc-34b3-4562-9798-92d443fb76ca-kube-api-access-k27rf" (OuterVolumeSpecName: "kube-api-access-k27rf") pod "aa4939cc-34b3-4562-9798-92d443fb76ca" (UID: "aa4939cc-34b3-4562-9798-92d443fb76ca"). InnerVolumeSpecName "kube-api-access-k27rf". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.856295 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "aa4939cc-34b3-4562-9798-92d443fb76ca" (UID: "aa4939cc-34b3-4562-9798-92d443fb76ca"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.857798 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "aa4939cc-34b3-4562-9798-92d443fb76ca" (UID: "aa4939cc-34b3-4562-9798-92d443fb76ca"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.926342 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.926620 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.926707 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.926816 4727 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa4939cc-34b3-4562-9798-92d443fb76ca-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.926921 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k27rf\" (UniqueName: \"kubernetes.io/projected/aa4939cc-34b3-4562-9798-92d443fb76ca-kube-api-access-k27rf\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.927030 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.927142 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.927233 4727 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.927322 4727 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.927409 4727 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.927488 4727 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.927563 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.927635 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.927712 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.927833 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.927966 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:42 crc kubenswrapper[4727]: I1210 14:36:42.928055 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa4939cc-34b3-4562-9798-92d443fb76ca-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.228233 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5v95" event={"ID":"2484515c-1846-4e63-9747-bc6dc81a574c","Type":"ContainerStarted","Data":"3cf74f698f8d36052f73824074cf75e74267f3ae2756db9e8fceb2c78c632135"} Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.229481 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.229707 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc 
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.229936 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.230149 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.230405 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.242335 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qn5hx" event={"ID":"7f245f78-d777-49e5-8bf1-69a6bb04943b","Type":"ContainerStarted","Data":"a3f43c89f04e05e3c4fb1723df7bfac6d2cf2308262d4377cfd218c4b8219df0"}
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.243763 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.244052 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.244257 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.244450 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.244641 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.244843 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.253669 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns2g4" event={"ID":"4da708e0-26ae-4bf4-ab5c-ca793fc6e207","Type":"ContainerStarted","Data":"819960a4b10519e264cbe88aa183c2bbab7bb01b1cab20e945a23137409d4378"}
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.254873 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.255265 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.255473 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.255656 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.255830 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.256036 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.256229 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.270304 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4sqds" event={"ID":"dea7b38c-5f36-498f-93c4-23e849473cb4","Type":"ContainerStarted","Data":"8ce0583ef3d4e343fe14b5f91442690a11557554538715163e4aca09800abf01"}
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.273359 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.273649 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.273872 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.275626 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.283309 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.283810 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.285694 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.286514 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.288727 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfkcf" event={"ID":"98bd9482-a59d-4e44-ba30-6a0277bcb2ae","Type":"ContainerStarted","Data":"fd515ccf4f46612e25578ee57df0a511a672c25c95f264db50f31962b5b29a75"}
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.290123 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.290377 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.290579 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.290790 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.293294 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.293804 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.295247 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.295754 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.296170 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.300097 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.312769 4727 scope.go:117] "RemoveContainer" containerID="6e287c1403ba5d24fe2290cfc869210d460aa40f56da582c711f49e4df0ab4a9"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.312989 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.326205 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7k77" event={"ID":"d3b0146e-cc5a-48ef-904a-b2d28a6720f3","Type":"ContainerStarted","Data":"f41dbd47d6539b9eb3becb8d428d718c1430fa8c7b4672590328bf59d5089c18"}
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.327462 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.327666 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.328576 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.329007 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.329211 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.329709 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.331148 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.335043 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.335346 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.335618 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.360388 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thvgk" event={"ID":"517d22f8-c007-4428-82f1-1fe55445d509","Type":"ContainerStarted","Data":"b5af219de67553d9df393048ab11ead0c71d1c48718358201ea2e50758d11505"}
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.361218 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.361818 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.362259 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.362568 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.362676 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e8c3fc98060d32b9f7025127b3f0af279c929ced4607d87e862daa5995d6e5d7"}
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.362823 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.363052 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.363310 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.366658 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.371123 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.371266 4727 scope.go:117] "RemoveContainer" containerID="e0c1b6169363fecb3dbf047da70b88755513a4c6b54dbda3af98010502bf600e"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.371956 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.372301 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.372546 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2hk4" event={"ID":"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96","Type":"ContainerStarted","Data":"0fdddf070bd12fd4f4aa164653ea641b998391ca75754b86b5bdb93cbe68dfc6"}
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.384374 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.386095 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.386489 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.386673 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.386830 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.386989 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.387143 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.387341 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.387482 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.387614 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.387749 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.388053 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.388340 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.388497 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.388646 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.388786 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.388950 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.389091 4727 status_manager.go:851] "Failed to get status for pod"
podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.389230 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.389368 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.389514 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.389746 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.395581 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" event={"ID":"aa4939cc-34b3-4562-9798-92d443fb76ca","Type":"ContainerDied","Data":"ffcfb8a3d54805344b50a4828af85d8a7dcfa6468f37013d32dbf6de38f50cfd"} Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.395809 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.408147 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.409192 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.409445 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.409674 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.409933 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.410174 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.410394 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.410657 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.410869 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.411106 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.411313 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.411525 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.411735 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.422318 4727 scope.go:117] "RemoveContainer" containerID="360b381dc9829fdd2d94155515c1628e0880f898ade804376267b234b365752a" Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.511392 4727 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 10 14:36:43 crc kubenswrapper[4727]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(3abb8aadbccdce914daeadbc7ce6d4e893d500bfc771e4e63c728af6fde041d1): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3abb8aadbccdce914daeadbc7ce6d4e893d500bfc771e4e63c728af6fde041d1" Netns:"/var/run/netns/30b7ba3c-bf1b-4c15-9630-5b4b44dc90cc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=3abb8aadbccdce914daeadbc7ce6d4e893d500bfc771e4e63c728af6fde041d1;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.180:6443: connect: connection refused Dec 10 14:36:43 crc kubenswrapper[4727]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 10 14:36:43 crc kubenswrapper[4727]: > Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.511535 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 10 14:36:43 crc kubenswrapper[4727]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(3abb8aadbccdce914daeadbc7ce6d4e893d500bfc771e4e63c728af6fde041d1): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3abb8aadbccdce914daeadbc7ce6d4e893d500bfc771e4e63c728af6fde041d1" Netns:"/var/run/netns/30b7ba3c-bf1b-4c15-9630-5b4b44dc90cc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=3abb8aadbccdce914daeadbc7ce6d4e893d500bfc771e4e63c728af6fde041d1;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.180:6443: connect: connection refused Dec 10 14:36:43 crc kubenswrapper[4727]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 10 14:36:43 crc kubenswrapper[4727]: > pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.511579 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Dec 10 14:36:43 crc kubenswrapper[4727]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(3abb8aadbccdce914daeadbc7ce6d4e893d500bfc771e4e63c728af6fde041d1): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI 
network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3abb8aadbccdce914daeadbc7ce6d4e893d500bfc771e4e63c728af6fde041d1" Netns:"/var/run/netns/30b7ba3c-bf1b-4c15-9630-5b4b44dc90cc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=3abb8aadbccdce914daeadbc7ce6d4e893d500bfc771e4e63c728af6fde041d1;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.180:6443: connect: connection refused Dec 10 14:36:43 crc kubenswrapper[4727]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 10 14:36:43 crc kubenswrapper[4727]: > pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.511677 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"networking-console-plugin-85b44fc459-gdk6g_openshift-network-console(5fe485a1-e14f-4c09-b5b9-f252bc42b7e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"networking-console-plugin-85b44fc459-gdk6g_openshift-network-console(5fe485a1-e14f-4c09-b5b9-f252bc42b7e8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(3abb8aadbccdce914daeadbc7ce6d4e893d500bfc771e4e63c728af6fde041d1): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"3abb8aadbccdce914daeadbc7ce6d4e893d500bfc771e4e63c728af6fde041d1\\\" Netns:\\\"/var/run/netns/30b7ba3c-bf1b-4c15-9630-5b4b44dc90cc\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=3abb8aadbccdce914daeadbc7ce6d4e893d500bfc771e4e63c728af6fde041d1;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: 
failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s\\\": dial tcp 38.102.83.180:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.545088 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:36:43Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:36:43Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:36:43Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:36:43Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:12229d63d4b4250e43152639ed3cbe34d626be27e614b86de239fe3ea658c5bf\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a2c22c72b2c9a2eaa6cbb68af6dc2899312e2d4e38577bf019ce87803118584e\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1219138896},{\\\"names\\\":[],\\\"sizeBytes\\\":1201963846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:be25e28aabd5a6e06b4df55e58fa4be426c96c57e3387969e0070e6058149d04\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e6f1bca5d60a93ec9f9bd8ae305cd4ded3f62b2a51bbfdf59e056ea57c0c5b9f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1154573130},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-oper
ator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2
f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"]
,\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.548281 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.548889 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.549074 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.549415 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.549452 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.617276 4727 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 10 14:36:43 crc kubenswrapper[4727]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(87d56cbcb5eef49422eed6fee6ef9b47cdc8381115078cfe0b40f02d88062bc1): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"87d56cbcb5eef49422eed6fee6ef9b47cdc8381115078cfe0b40f02d88062bc1" Netns:"/var/run/netns/378c3d0d-f32d-4ffe-a108-06e193a3456e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=87d56cbcb5eef49422eed6fee6ef9b47cdc8381115078cfe0b40f02d88062bc1;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: 
SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.180:6443: connect: connection refused Dec 10 14:36:43 crc kubenswrapper[4727]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 10 14:36:43 crc kubenswrapper[4727]: > Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.617783 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 10 14:36:43 crc kubenswrapper[4727]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(87d56cbcb5eef49422eed6fee6ef9b47cdc8381115078cfe0b40f02d88062bc1): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"87d56cbcb5eef49422eed6fee6ef9b47cdc8381115078cfe0b40f02d88062bc1" Netns:"/var/run/netns/378c3d0d-f32d-4ffe-a108-06e193a3456e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=87d56cbcb5eef49422eed6fee6ef9b47cdc8381115078cfe0b40f02d88062bc1;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.180:6443: connect: connection refused Dec 10 14:36:43 crc kubenswrapper[4727]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 10 14:36:43 crc kubenswrapper[4727]: > pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.617812 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Dec 10 14:36:43 crc kubenswrapper[4727]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(87d56cbcb5eef49422eed6fee6ef9b47cdc8381115078cfe0b40f02d88062bc1): 
error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"87d56cbcb5eef49422eed6fee6ef9b47cdc8381115078cfe0b40f02d88062bc1" Netns:"/var/run/netns/378c3d0d-f32d-4ffe-a108-06e193a3456e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=87d56cbcb5eef49422eed6fee6ef9b47cdc8381115078cfe0b40f02d88062bc1;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.180:6443: connect: connection refused Dec 10 14:36:43 crc kubenswrapper[4727]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 10 14:36:43 crc kubenswrapper[4727]: > pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.617889 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-xd92c_openshift-network-diagnostics(3b6479f0-333b-4a96-9adf-2099afdc2447)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-xd92c_openshift-network-diagnostics(3b6479f0-333b-4a96-9adf-2099afdc2447)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(87d56cbcb5eef49422eed6fee6ef9b47cdc8381115078cfe0b40f02d88062bc1): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"87d56cbcb5eef49422eed6fee6ef9b47cdc8381115078cfe0b40f02d88062bc1\\\" Netns:\\\"/var/run/netns/378c3d0d-f32d-4ffe-a108-06e193a3456e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=87d56cbcb5eef49422eed6fee6ef9b47cdc8381115078cfe0b40f02d88062bc1;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: 
failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s\\\": dial tcp 38.102.83.180:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.694159 4727 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 10 14:36:43 crc kubenswrapper[4727]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(543740250db383f7d062ee51af6e671929186335461c0d8b6f7b08bea8b6ad03): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"543740250db383f7d062ee51af6e671929186335461c0d8b6f7b08bea8b6ad03" Netns:"/var/run/netns/0fa3898b-3cea-4a78-9b85-36b812fcfdf3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=543740250db383f7d062ee51af6e671929186335461c0d8b6f7b08bea8b6ad03;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.180:6443: connect: connection refused Dec 10 14:36:43 crc kubenswrapper[4727]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 10 14:36:43 crc kubenswrapper[4727]: > Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.694254 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 10 14:36:43 crc kubenswrapper[4727]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(543740250db383f7d062ee51af6e671929186335461c0d8b6f7b08bea8b6ad03): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"543740250db383f7d062ee51af6e671929186335461c0d8b6f7b08bea8b6ad03" Netns:"/var/run/netns/0fa3898b-3cea-4a78-9b85-36b812fcfdf3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=543740250db383f7d062ee51af6e671929186335461c0d8b6f7b08bea8b6ad03;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.180:6443: connect: connection refused Dec 10 14:36:43 crc kubenswrapper[4727]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 10 14:36:43 crc kubenswrapper[4727]: > pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.694280 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Dec 10 14:36:43 crc kubenswrapper[4727]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(543740250db383f7d062ee51af6e671929186335461c0d8b6f7b08bea8b6ad03): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"543740250db383f7d062ee51af6e671929186335461c0d8b6f7b08bea8b6ad03" Netns:"/var/run/netns/0fa3898b-3cea-4a78-9b85-36b812fcfdf3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=543740250db383f7d062ee51af6e671929186335461c0d8b6f7b08bea8b6ad03;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in 
out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.180:6443: connect: connection refused Dec 10 14:36:43 crc kubenswrapper[4727]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 10 14:36:43 crc kubenswrapper[4727]: > pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:36:43 crc kubenswrapper[4727]: E1210 14:36:43.694354 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(543740250db383f7d062ee51af6e671929186335461c0d8b6f7b08bea8b6ad03): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"543740250db383f7d062ee51af6e671929186335461c0d8b6f7b08bea8b6ad03\\\" Netns:\\\"/var/run/netns/0fa3898b-3cea-4a78-9b85-36b812fcfdf3\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=543740250db383f7d062ee51af6e671929186335461c0d8b6f7b08bea8b6ad03;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s\\\": dial tcp 38.102.83.180:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.720833 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.721088 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.721276 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.721483 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.721661 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.721824 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.722019 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.722463 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.722645 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.722825 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.723040 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.723212 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.724954 4727 scope.go:117] "RemoveContainer" containerID="64b075e38eda1b4b4312fd7cf90b5fd261eb4da63763c892e8ef71757132d058" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.748965 4727 scope.go:117] "RemoveContainer" containerID="229e6eb73e1156aa5e55c19b11b64a72b14445f881cb808d3b7f9a4d0e05a21c" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.770850 4727 scope.go:117] "RemoveContainer" containerID="0298d22b12a5eee535e2dacfbb0192b64899464e251ff0b07e32ce46318a1d2b" Dec 10 14:36:43 crc kubenswrapper[4727]: I1210 14:36:43.807865 4727 scope.go:117] "RemoveContainer" containerID="c3b9b52e32c30652f33052438c98b4b800e09cd906e1cb1d8c9107bec99ab3d0" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.575344 4727 generic.go:334] "Generic (PLEG): container finished" podID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" containerID="819960a4b10519e264cbe88aa183c2bbab7bb01b1cab20e945a23137409d4378" exitCode=0 Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.580602 4727 generic.go:334] "Generic (PLEG): container finished" podID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" containerID="fd515ccf4f46612e25578ee57df0a511a672c25c95f264db50f31962b5b29a75" exitCode=0 Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.582413 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.583757 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns2g4" event={"ID":"4da708e0-26ae-4bf4-ab5c-ca793fc6e207","Type":"ContainerDied","Data":"819960a4b10519e264cbe88aa183c2bbab7bb01b1cab20e945a23137409d4378"} Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.583791 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfkcf" event={"ID":"98bd9482-a59d-4e44-ba30-6a0277bcb2ae","Type":"ContainerDied","Data":"fd515ccf4f46612e25578ee57df0a511a672c25c95f264db50f31962b5b29a75"} Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.586740 4727 
status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.595235 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.595740 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.595857 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4a218f555195c98fba5305726a0d461871638ac42448cec06c8a9402a4f7fe9a"} Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.595976 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.596329 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.596576 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.596925 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.597188 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.597442 4727 status_manager.go:851] 
"Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.597633 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.597972 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.598310 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.598510 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.598756 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.598768 4727 generic.go:334] "Generic (PLEG): container finished" podID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" containerID="0fdddf070bd12fd4f4aa164653ea641b998391ca75754b86b5bdb93cbe68dfc6" exitCode=0 Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.598783 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2hk4" event={"ID":"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96","Type":"ContainerDied","Data":"0fdddf070bd12fd4f4aa164653ea641b998391ca75754b86b5bdb93cbe68dfc6"} Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.599135 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.599337 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.599513 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.599738 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.599949 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.600249 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.600627 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.600934 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.601281 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.601530 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.601860 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" 
pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.602107 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.602288 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.602500 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.604198 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.604607 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.604802 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.605004 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.605167 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.607323 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="2484515c-1846-4e63-9747-bc6dc81a574c" containerID="3cf74f698f8d36052f73824074cf75e74267f3ae2756db9e8fceb2c78c632135" exitCode=0 Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.607369 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5v95" event={"ID":"2484515c-1846-4e63-9747-bc6dc81a574c","Type":"ContainerDied","Data":"3cf74f698f8d36052f73824074cf75e74267f3ae2756db9e8fceb2c78c632135"} Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.608407 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.608602 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.608749 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.608916 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.609061 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.609228 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.609376 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.609525 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.609690 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.609889 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.610115 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.646716 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g7k77" Dec 10 14:36:44 crc kubenswrapper[4727]: I1210 14:36:44.646802 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g7k77" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.563134 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.565662 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.566294 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.566587 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.566838 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.567124 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.567397 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.567665 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.567925 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.568177 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.568435 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.568693 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.622030 4727 generic.go:334] "Generic (PLEG): container finished" podID="7f245f78-d777-49e5-8bf1-69a6bb04943b" containerID="a3f43c89f04e05e3c4fb1723df7bfac6d2cf2308262d4377cfd218c4b8219df0" exitCode=0 Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.622138 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qn5hx" event={"ID":"7f245f78-d777-49e5-8bf1-69a6bb04943b","Type":"ContainerDied","Data":"a3f43c89f04e05e3c4fb1723df7bfac6d2cf2308262d4377cfd218c4b8219df0"} Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.624359 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.624875 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.625637 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.626433 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.627129 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc 
kubenswrapper[4727]: I1210 14:36:45.627543 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.627792 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.628075 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.628495 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.628801 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.628841 4727 generic.go:334] "Generic (PLEG): container finished" podID="dea7b38c-5f36-498f-93c4-23e849473cb4" containerID="8ce0583ef3d4e343fe14b5f91442690a11557554538715163e4aca09800abf01" exitCode=0 Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.628952 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4sqds" event={"ID":"dea7b38c-5f36-498f-93c4-23e849473cb4","Type":"ContainerDied","Data":"8ce0583ef3d4e343fe14b5f91442690a11557554538715163e4aca09800abf01"} Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.629040 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.629999 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.630328 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" 
pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.630630 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.631007 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.631326 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.631668 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.632141 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.632461 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.632761 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.633022 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.633300 4727 status_manager.go:851] "Failed to get status for pod" 
podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.639729 4727 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.639811 4727 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa" Dec 10 14:36:45 crc kubenswrapper[4727]: E1210 14:36:45.640556 4727 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.641499 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:45 crc kubenswrapper[4727]: I1210 14:36:45.975725 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g7k77" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" containerName="registry-server" probeResult="failure" output=< Dec 10 14:36:45 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Dec 10 14:36:45 crc kubenswrapper[4727]: > Dec 10 14:36:46 crc kubenswrapper[4727]: W1210 14:36:46.064577 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-ca5542e93fad08e2f37aa004339a42c3ac19b4ddcd3ef32a6a49f403274a69ab WatchSource:0}: Error finding container ca5542e93fad08e2f37aa004339a42c3ac19b4ddcd3ef32a6a49f403274a69ab: Status 404 returned error can't find the container with id ca5542e93fad08e2f37aa004339a42c3ac19b4ddcd3ef32a6a49f403274a69ab Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.440037 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.440400 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.549985 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.550715 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.551473 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" 
Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.552068 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.552380 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.552693 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.553117 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.553583 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.553827 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.554150 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.554500 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.554727 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: 
connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.568706 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.568971 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.569157 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.569371 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.569572 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.569747 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.569955 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.570159 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.570332 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.570499 4727 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.570672 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.570880 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.637045 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfkcf" event={"ID":"98bd9482-a59d-4e44-ba30-6a0277bcb2ae","Type":"ContainerStarted","Data":"707c387928ee10f70d92f246adb5c0768b549e63c9690ac5e278a2d3b1d2c5e2"} Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.638363 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.638924 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.639399 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.639674 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.639965 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.640221 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.640519 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.640800 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.641059 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.641380 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.641753 4727 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.642374 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.644656 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2hk4" event={"ID":"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96","Type":"ContainerStarted","Data":"cf77e0941c682bdc324f57927bd9ccf89d7e29866d3483de41a842257c13fc7e"} Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.647241 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.647511 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.647744 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.647955 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.648161 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.648357 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.648551 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.648743 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.649419 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.649645 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" 
pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.649883 4727 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.650138 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.655650 4727 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="271a76a74987142a82f4aa3657ab8c6fc2f7b57c47d732247b518739bff2b322" exitCode=0 Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.655831 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"271a76a74987142a82f4aa3657ab8c6fc2f7b57c47d732247b518739bff2b322"} Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.655873 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ca5542e93fad08e2f37aa004339a42c3ac19b4ddcd3ef32a6a49f403274a69ab"} Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.656293 4727 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.656317 4727 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa" Dec 10 14:36:46 crc kubenswrapper[4727]: E1210 14:36:46.656759 4727 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.656896 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.659312 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.659548 4727 
status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.659843 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.660246 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.660644 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.660744 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5v95" event={"ID":"2484515c-1846-4e63-9747-bc6dc81a574c","Type":"ContainerStarted","Data":"3fed7902c2e1129bcc902f5aa5cac74edb31cf0b54952dffbeca558e9dc38656"} Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.660896 4727 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.662943 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.664536 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.664856 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.665384 4727 status_manager.go:851] "Failed to get status for pod" 
podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.665790 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.666505 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.667678 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.668197 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.668630 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.668812 4727 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.669017 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.669273 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.669503 4727 status_manager.go:851] "Failed to 
get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.669729 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.670002 4727 status_manager.go:851] "Failed to get status for pod" podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.670282 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.670526 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.673045 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns2g4" event={"ID":"4da708e0-26ae-4bf4-ab5c-ca793fc6e207","Type":"ContainerStarted","Data":"0baa338d94fc2e5499cdee6f61f4836340c16dcf1efb1cd7cdbffe14ce03ed47"} Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.677848 4727 status_manager.go:851] "Failed to get status for pod" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" pod="openshift-marketplace/community-operators-xfkcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xfkcf\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.678493 4727 status_manager.go:851] "Failed to get status for pod" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" pod="openshift-marketplace/community-operators-p5v95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p5v95\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.679033 4727 status_manager.go:851] "Failed to get status for pod" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" pod="openshift-marketplace/certified-operators-s2hk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s2hk4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.679506 4727 status_manager.go:851] "Failed to get status for pod" 
podUID="517d22f8-c007-4428-82f1-1fe55445d509" pod="openshift-marketplace/redhat-marketplace-thvgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-thvgk\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.679674 4727 status_manager.go:851] "Failed to get status for pod" podUID="da95bc09-e230-4593-bb24-4723a883571f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.679832 4727 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.680014 4727 status_manager.go:851] "Failed to get status for pod" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" pod="openshift-marketplace/redhat-operators-4sqds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4sqds\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.680182 4727 status_manager.go:851] "Failed to get status for pod" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" pod="openshift-marketplace/redhat-operators-qn5hx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qn5hx\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.680459 4727 status_manager.go:851] "Failed to get status for pod" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" pod="openshift-authentication/oauth-openshift-558db77b4-nv97s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nv97s\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.680632 4727 status_manager.go:851] "Failed to get status for pod" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" pod="openshift-marketplace/certified-operators-g7k77" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7k77\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.681020 4727 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.681741 4727 status_manager.go:851] "Failed to get status for pod" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" pod="openshift-marketplace/redhat-marketplace-ns2g4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ns2g4\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.931009 4727 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ns2g4" Dec 10 14:36:46 crc kubenswrapper[4727]: I1210 14:36:46.931102 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ns2g4" Dec 10 14:36:47 crc kubenswrapper[4727]: I1210 14:36:47.687877 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"faf7330ec2c7501c3cd8710f1fe049bdf4dd6230080f01094bb5aa2b186f588f"} Dec 10 14:36:47 crc kubenswrapper[4727]: I1210 14:36:47.689269 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"470cedb06705d00dab6a1a0ef592777df4d31d071e315e63acc84ae5b61516cf"} Dec 10 14:36:47 crc kubenswrapper[4727]: I1210 14:36:47.705204 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 10 14:36:47 crc kubenswrapper[4727]: I1210 14:36:47.705288 4727 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="27f54dcc8b8184353685144518200440ca7fe027ce457ae0cbb85f6bac6935fa" exitCode=1 Dec 10 14:36:47 crc kubenswrapper[4727]: I1210 14:36:47.705463 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"27f54dcc8b8184353685144518200440ca7fe027ce457ae0cbb85f6bac6935fa"} Dec 10 14:36:47 crc kubenswrapper[4727]: I1210 14:36:47.706233 4727 scope.go:117] "RemoveContainer" containerID="27f54dcc8b8184353685144518200440ca7fe027ce457ae0cbb85f6bac6935fa" Dec 10 14:36:47 crc kubenswrapper[4727]: I1210 14:36:47.711695 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qn5hx" event={"ID":"7f245f78-d777-49e5-8bf1-69a6bb04943b","Type":"ContainerStarted","Data":"092365009e80e94fe7f922e5b01c4091eecb273e99cce46e57f871abb511385d"} Dec 10 14:36:47 crc kubenswrapper[4727]: I1210 14:36:47.719997 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4sqds" event={"ID":"dea7b38c-5f36-498f-93c4-23e849473cb4","Type":"ContainerStarted","Data":"c4a706a084c074700617fa3cbf3733f58fc461c759567ed1169ba3e0dc6902b9"} Dec 10 14:36:48 crc kubenswrapper[4727]: I1210 14:36:48.119426 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:36:48 crc kubenswrapper[4727]: I1210 14:36:48.119481 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:36:48 crc kubenswrapper[4727]: I1210 14:36:48.188147 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-ns2g4" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" containerName="registry-server" probeResult="failure" output=< Dec 10 14:36:48 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Dec 10 14:36:48 crc kubenswrapper[4727]: > Dec 10 14:36:48 crc kubenswrapper[4727]: I1210 14:36:48.984661 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:36:49 crc kubenswrapper[4727]: I1210 14:36:49.009285 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f1492cc3389393ad982c9bc94679b62bfde69a58164da3c4c1d91b847d8e88b5"} Dec 10 14:36:49 crc kubenswrapper[4727]: I1210 14:36:49.016786 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 10 14:36:49 crc kubenswrapper[4727]: I1210 14:36:49.016991 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"89b69ca9e0a380d26635d6bd98b63b0d4c3907148da6ed28d5f6b34c5274a7f2"} Dec 10 14:36:49 crc kubenswrapper[4727]: I1210 14:36:49.207771 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4sqds" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" containerName="registry-server" probeResult="failure" output=< Dec 10 14:36:49 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Dec 10 14:36:49 crc kubenswrapper[4727]: > Dec 10 14:36:51 crc kubenswrapper[4727]: I1210 14:36:51.047848 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3d36b53f305f06e88de78b3f70062c7df7d079f0262ae08ed9d560101b14b993"} Dec 10 14:36:52 crc kubenswrapper[4727]: I1210 14:36:52.060562 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"88ea30bbea15ebb3784346438465e11c9731ebea6fc137ba989d0ce2c6bc1549"} Dec 10 14:36:52 crc kubenswrapper[4727]: I1210 14:36:52.060921 4727 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa" Dec 10 14:36:52 crc kubenswrapper[4727]: I1210 14:36:52.061782 4727 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa" Dec 10 14:36:52 crc kubenswrapper[4727]: I1210 14:36:52.061707 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:54 crc kubenswrapper[4727]: I1210 14:36:54.026652 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p5v95" Dec 10 14:36:54 crc kubenswrapper[4727]: I1210 14:36:54.027227 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p5v95" Dec 10 14:36:54 crc kubenswrapper[4727]: I1210 14:36:54.079857 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p5v95" Dec 10 14:36:54 crc kubenswrapper[4727]: I1210 14:36:54.128982 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p5v95" Dec 10 14:36:54 crc kubenswrapper[4727]: I1210 14:36:54.433843 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-xfkcf" Dec 10 14:36:54 crc kubenswrapper[4727]: I1210 14:36:54.434048 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xfkcf" Dec 10 14:36:54 crc kubenswrapper[4727]: I1210 14:36:54.487995 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xfkcf" Dec 10 14:36:54 crc kubenswrapper[4727]: I1210 14:36:54.687717 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g7k77" Dec 10 14:36:54 crc kubenswrapper[4727]: I1210 14:36:54.725679 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g7k77" Dec 10 14:36:54 crc kubenswrapper[4727]: I1210 14:36:54.849122 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s2hk4" Dec 10 14:36:54 crc kubenswrapper[4727]: I1210 14:36:54.849811 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s2hk4" Dec 10 14:36:54 crc kubenswrapper[4727]: I1210 14:36:54.888672 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s2hk4" Dec 10 14:36:55 crc kubenswrapper[4727]: I1210 14:36:55.127390 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xfkcf" Dec 10 14:36:55 crc kubenswrapper[4727]: I1210 14:36:55.127984 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s2hk4" Dec 10 14:36:55 crc kubenswrapper[4727]: I1210 14:36:55.642055 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:55 crc kubenswrapper[4727]: I1210 14:36:55.642107 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:55 crc kubenswrapper[4727]: I1210 14:36:55.646999 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:55 crc kubenswrapper[4727]: I1210 14:36:55.998963 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:36:56 crc kubenswrapper[4727]: I1210 14:36:56.490789 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:36:56 crc kubenswrapper[4727]: I1210 14:36:56.562795 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:36:56 crc kubenswrapper[4727]: I1210 14:36:56.562802 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:36:56 crc kubenswrapper[4727]: I1210 14:36:56.563482 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:36:56 crc kubenswrapper[4727]: I1210 14:36:56.563606 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:36:57 crc kubenswrapper[4727]: I1210 14:36:57.059884 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ns2g4" Dec 10 14:36:57 crc kubenswrapper[4727]: I1210 14:36:57.165121 4727 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:57 crc kubenswrapper[4727]: I1210 14:36:57.179800 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ns2g4" Dec 10 14:36:57 crc kubenswrapper[4727]: W1210 14:36:57.240072 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-d5c32d0b2ac208be6d9671ca02a0593cb6498bfdd7a4e007f592e6795060c5ff WatchSource:0}: Error finding container d5c32d0b2ac208be6d9671ca02a0593cb6498bfdd7a4e007f592e6795060c5ff: Status 404 returned error can't find the container with id d5c32d0b2ac208be6d9671ca02a0593cb6498bfdd7a4e007f592e6795060c5ff Dec 10 14:36:57 crc kubenswrapper[4727]: W1210 14:36:57.285161 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-658b15ffa9fb153fe92fe3bb7121f8a0b99f47d69c557f1ce5cfa09b52823ab4 WatchSource:0}: Error finding container 658b15ffa9fb153fe92fe3bb7121f8a0b99f47d69c557f1ce5cfa09b52823ab4: Status 404 returned error can't find the container with id 658b15ffa9fb153fe92fe3bb7121f8a0b99f47d69c557f1ce5cfa09b52823ab4 Dec 10 14:36:57 crc kubenswrapper[4727]: I1210 14:36:57.578124 4727 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="bbd8ac92-43f2-4c36-99ef-2634fa8764ed" Dec 10 14:36:57 crc kubenswrapper[4727]: I1210 14:36:57.680159 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:36:57 crc kubenswrapper[4727]: I1210 14:36:57.680214 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:36:57 crc kubenswrapper[4727]: I1210 14:36:57.862688 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:36:58 crc kubenswrapper[4727]: I1210 14:36:58.101886 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d5c32d0b2ac208be6d9671ca02a0593cb6498bfdd7a4e007f592e6795060c5ff"} Dec 10 14:36:58 crc kubenswrapper[4727]: I1210 14:36:58.103043 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"658b15ffa9fb153fe92fe3bb7121f8a0b99f47d69c557f1ce5cfa09b52823ab4"} Dec 10 14:36:58 crc kubenswrapper[4727]: I1210 14:36:58.103693 4727 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa" Dec 10 14:36:58 crc kubenswrapper[4727]: I1210 14:36:58.103725 4727 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa" Dec 10 14:36:58 crc kubenswrapper[4727]: I1210 14:36:58.143430 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:36:58 crc kubenswrapper[4727]: I1210 14:36:58.188780 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:36:58 crc kubenswrapper[4727]: I1210 14:36:58.241067 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:36:58 crc kubenswrapper[4727]: I1210 14:36:58.296455 4727 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="bbd8ac92-43f2-4c36-99ef-2634fa8764ed" Dec 10 14:36:58 crc kubenswrapper[4727]: I1210 14:36:58.562927 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:36:58 crc kubenswrapper[4727]: I1210 14:36:58.563968 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:36:58 crc kubenswrapper[4727]: I1210 14:36:58.619754 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:36:58 crc kubenswrapper[4727]: I1210 14:36:58.623945 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:36:58 crc kubenswrapper[4727]: W1210 14:36:58.812494 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-1cea285158ee7191c79d0c244501cdec53dbf3120769b3f2db54ed4d1f3c2483 WatchSource:0}: Error finding container 1cea285158ee7191c79d0c244501cdec53dbf3120769b3f2db54ed4d1f3c2483: Status 404 returned error can't find the container with id 1cea285158ee7191c79d0c244501cdec53dbf3120769b3f2db54ed4d1f3c2483 Dec 10 14:36:59 crc kubenswrapper[4727]: I1210 14:36:59.111773 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1cea285158ee7191c79d0c244501cdec53dbf3120769b3f2db54ed4d1f3c2483"} Dec 10 14:36:59 crc kubenswrapper[4727]: I1210 14:36:59.113916 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"83b6134a3a1b062397ed7f091c827cbe30db72e33623f87a9224d93b6a48145d"} Dec 10 14:36:59 crc kubenswrapper[4727]: I1210 14:36:59.115774 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f6986b9b1d7e1846563a56488f40b9fa1424b957708a22e30375a8e46f98791c"} Dec 10 14:36:59 crc kubenswrapper[4727]: I1210 14:36:59.120917 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:37:01 crc kubenswrapper[4727]: I1210 14:37:01.135417 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e4392c7285c8582ed90559c7d36691035f31fbadd49822942a6677321bae4d0e"} Dec 10 14:37:01 crc kubenswrapper[4727]: I1210 14:37:01.135742 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:37:05 crc kubenswrapper[4727]: I1210 14:37:05.163535 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Dec 10 14:37:05 crc kubenswrapper[4727]: I1210 14:37:05.163824 4727 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="83b6134a3a1b062397ed7f091c827cbe30db72e33623f87a9224d93b6a48145d" exitCode=255 Dec 10 14:37:05 crc kubenswrapper[4727]: I1210 14:37:05.163872 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"83b6134a3a1b062397ed7f091c827cbe30db72e33623f87a9224d93b6a48145d"} Dec 10 14:37:05 crc kubenswrapper[4727]: I1210 14:37:05.164466 4727 scope.go:117] "RemoveContainer" containerID="83b6134a3a1b062397ed7f091c827cbe30db72e33623f87a9224d93b6a48145d" Dec 10 14:37:06 crc kubenswrapper[4727]: I1210 14:37:06.170870 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Dec 10 14:37:06 crc kubenswrapper[4727]: I1210 14:37:06.171794 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Dec 10 14:37:06 crc kubenswrapper[4727]: I1210 14:37:06.171830 4727 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="c891c53181eb68249621dd456a38145fd677d3217a7df8cb63ae4eb5763d20fc" exitCode=255 Dec 10 14:37:06 crc kubenswrapper[4727]: I1210 14:37:06.171861 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"c891c53181eb68249621dd456a38145fd677d3217a7df8cb63ae4eb5763d20fc"} Dec 10 14:37:06 crc kubenswrapper[4727]: I1210 14:37:06.171894 4727 scope.go:117] "RemoveContainer" containerID="83b6134a3a1b062397ed7f091c827cbe30db72e33623f87a9224d93b6a48145d" Dec 10 14:37:06 crc kubenswrapper[4727]: I1210 14:37:06.172472 4727 scope.go:117] "RemoveContainer" containerID="c891c53181eb68249621dd456a38145fd677d3217a7df8cb63ae4eb5763d20fc" Dec 10 14:37:06 crc kubenswrapper[4727]: E1210 14:37:06.172683 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:37:06 crc kubenswrapper[4727]: I1210 14:37:06.297659 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 10 14:37:06 crc kubenswrapper[4727]: I1210 14:37:06.499974 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 10 14:37:07 crc kubenswrapper[4727]: I1210 14:37:07.179882 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Dec 10 14:37:07 crc kubenswrapper[4727]: I1210 14:37:07.322650 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 10 14:37:07 crc kubenswrapper[4727]: I1210 14:37:07.338417 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 10 14:37:07 crc kubenswrapper[4727]: I1210 14:37:07.621548 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 10 14:37:07 crc kubenswrapper[4727]: I1210 14:37:07.788202 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 10 14:37:07 crc kubenswrapper[4727]: I1210 14:37:07.843596 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 10 14:37:08 crc kubenswrapper[4727]: I1210 14:37:08.273368 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 10 14:37:08 crc kubenswrapper[4727]: I1210 14:37:08.371502 4727 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 10 14:37:08 crc kubenswrapper[4727]: I1210 14:37:08.497733 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 10 14:37:08 crc kubenswrapper[4727]: I1210 14:37:08.686216 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 10 14:37:08 crc kubenswrapper[4727]: I1210 14:37:08.799987 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 10 14:37:08 crc kubenswrapper[4727]: I1210 14:37:08.839191 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 10 14:37:08 crc kubenswrapper[4727]: I1210 14:37:08.860797 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 10 14:37:09 crc kubenswrapper[4727]: I1210 14:37:09.200970 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 10 14:37:09 crc kubenswrapper[4727]: I1210 14:37:09.305385 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 10 14:37:09 crc kubenswrapper[4727]: I1210 14:37:09.522176 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 10 14:37:09 crc kubenswrapper[4727]: I1210 14:37:09.522277 4727 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 10 14:37:09 crc kubenswrapper[4727]: I1210 14:37:09.755828 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 10 14:37:09 crc kubenswrapper[4727]: I1210 14:37:09.933203 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 10 14:37:09 crc kubenswrapper[4727]: I1210 14:37:09.967547 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.003678 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.114039 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.153487 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.324592 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.326341 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.483019 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.557186 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.559025 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.596109 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.614022 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.637289 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.674041 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.708837 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.716252 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.725220 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 10 14:37:10 crc 
kubenswrapper[4727]: I1210 14:37:10.795621 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.865233 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.887499 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.927726 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.950291 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.964156 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.965141 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 10 14:37:10 crc kubenswrapper[4727]: I1210 14:37:10.996346 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.033671 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.144744 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.161720 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.225614 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.291953 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.370676 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.384840 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.415499 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.562779 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.567786 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.569325 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 10 14:37:11 crc kubenswrapper[4727]: 
I1210 14:37:11.602524 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.664174 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.683838 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.686509 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.727755 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.771533 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.812132 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.867608 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.922928 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 10 14:37:11 crc kubenswrapper[4727]: I1210 14:37:11.973226 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.040117 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.050900 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.074799 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.213980 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.272234 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.397935 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.417319 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.431528 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.500679 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 10 14:37:12 crc kubenswrapper[4727]: 
I1210 14:37:12.510047 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.543534 4727 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.577779 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.578048 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.591404 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.644342 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.677266 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.772548 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.777674 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.779978 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.830557 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.832526 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.855360 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.924884 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.926105 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 10 14:37:12 crc kubenswrapper[4727]: I1210 14:37:12.965538 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.030143 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.034040 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.200078 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
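The reflector.go:368 burst records client-go watch caches completing their initial LIST for each Secret and ConfigMap the kubelet tracks as a volume or env source. A minimal sketch of the same mechanism using a shared informer factory (the kubeconfig path is illustrative, and the kubelet wires its reflectors differently internally):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable kubeconfig; the path here is illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// WaitForCacheSync returns once the reflector has listed and begun
	// watching, i.e. roughly the moment the log reports "Caches populated".
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("ConfigMap cache populated")
}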
object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.215895 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.298728 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.321972 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.359700 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.562348 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.563501 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.565081 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.685289 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.718358 4727 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.720361 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p5v95" podStartSLOduration=32.100756897 podStartE2EDuration="2m40.720316883s" podCreationTimestamp="2025-12-10 14:34:33 +0000 UTC" firstStartedPulling="2025-12-10 14:34:37.490128663 +0000 UTC m=+181.684903205" lastFinishedPulling="2025-12-10 14:36:46.109688649 +0000 UTC m=+310.304463191" observedRunningTime="2025-12-10 14:36:56.358038983 +0000 UTC m=+320.552813535" watchObservedRunningTime="2025-12-10 14:37:13.720316883 +0000 UTC m=+337.915091425" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.720609 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4sqds" podStartSLOduration=32.92720069 podStartE2EDuration="2m36.72060202s" podCreationTimestamp="2025-12-10 14:34:37 +0000 UTC" firstStartedPulling="2025-12-10 14:34:42.655987035 +0000 UTC m=+186.850761567" lastFinishedPulling="2025-12-10 14:36:46.449388365 +0000 UTC m=+310.644162897" observedRunningTime="2025-12-10 14:36:56.468096272 +0000 UTC m=+320.662870814" watchObservedRunningTime="2025-12-10 14:37:13.72060202 +0000 UTC m=+337.915376562" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.720718 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qn5hx" podStartSLOduration=32.830606654 podStartE2EDuration="2m36.720713003s" podCreationTimestamp="2025-12-10 14:34:37 +0000 UTC" firstStartedPulling="2025-12-10 14:34:42.656888327 +0000 UTC m=+186.851662879" lastFinishedPulling="2025-12-10 14:36:46.546994686 +0000 UTC m=+310.741769228" observedRunningTime="2025-12-10 14:36:56.503212456 +0000 UTC m=+320.697986998" 
watchObservedRunningTime="2025-12-10 14:37:13.720713003 +0000 UTC m=+337.915487545" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.721618 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g7k77" podStartSLOduration=34.606929369 podStartE2EDuration="2m39.721611818s" podCreationTimestamp="2025-12-10 14:34:34 +0000 UTC" firstStartedPulling="2025-12-10 14:34:37.425082832 +0000 UTC m=+181.619857374" lastFinishedPulling="2025-12-10 14:36:42.539765281 +0000 UTC m=+306.734539823" observedRunningTime="2025-12-10 14:36:56.228662379 +0000 UTC m=+320.423436931" watchObservedRunningTime="2025-12-10 14:37:13.721611818 +0000 UTC m=+337.916386360" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.722789 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-thvgk" podStartSLOduration=33.602982915 podStartE2EDuration="2m37.72278287s" podCreationTimestamp="2025-12-10 14:34:36 +0000 UTC" firstStartedPulling="2025-12-10 14:34:38.557031578 +0000 UTC m=+182.751806120" lastFinishedPulling="2025-12-10 14:36:42.676831523 +0000 UTC m=+306.871606075" observedRunningTime="2025-12-10 14:36:56.398302696 +0000 UTC m=+320.593077238" watchObservedRunningTime="2025-12-10 14:37:13.72278287 +0000 UTC m=+337.917557432" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.723256 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xfkcf" podStartSLOduration=31.078138906 podStartE2EDuration="2m39.723250543s" podCreationTimestamp="2025-12-10 14:34:34 +0000 UTC" firstStartedPulling="2025-12-10 14:34:37.459415332 +0000 UTC m=+181.654189874" lastFinishedPulling="2025-12-10 14:36:46.104526969 +0000 UTC m=+310.299301511" observedRunningTime="2025-12-10 14:36:56.340889687 +0000 UTC m=+320.535664239" watchObservedRunningTime="2025-12-10 14:37:13.723250543 +0000 UTC m=+337.918025085" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.723441 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ns2g4" podStartSLOduration=30.255444068 podStartE2EDuration="2m37.723436148s" podCreationTimestamp="2025-12-10 14:34:36 +0000 UTC" firstStartedPulling="2025-12-10 14:34:38.5857782 +0000 UTC m=+182.780552742" lastFinishedPulling="2025-12-10 14:36:46.05377028 +0000 UTC m=+310.248544822" observedRunningTime="2025-12-10 14:36:56.326834285 +0000 UTC m=+320.521608827" watchObservedRunningTime="2025-12-10 14:37:13.723436148 +0000 UTC m=+337.918210690" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.724356 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.724350133 podStartE2EDuration="41.724350133s" podCreationTimestamp="2025-12-10 14:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:36:56.439033353 +0000 UTC m=+320.633807895" watchObservedRunningTime="2025-12-10 14:37:13.724350133 +0000 UTC m=+337.919124675" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.724704 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s2hk4" podStartSLOduration=31.108100874 podStartE2EDuration="2m39.724699752s" podCreationTimestamp="2025-12-10 14:34:34 +0000 UTC" firstStartedPulling="2025-12-10 
14:34:37.386106687 +0000 UTC m=+181.580881229" lastFinishedPulling="2025-12-10 14:36:46.002705525 +0000 UTC m=+310.197480107" observedRunningTime="2025-12-10 14:36:56.372257259 +0000 UTC m=+320.567031811" watchObservedRunningTime="2025-12-10 14:37:13.724699752 +0000 UTC m=+337.919474294" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.725612 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-nv97s"] Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.725767 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-758c4c8f95-g4hkk","openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.726371 4727 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.726421 4727 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4de57f25-ff3d-4ad5-8a6e-3de1c62f9afa" Dec 10 14:37:13 crc kubenswrapper[4727]: E1210 14:37:13.727125 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da95bc09-e230-4593-bb24-4723a883571f" containerName="installer" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.727165 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="da95bc09-e230-4593-bb24-4723a883571f" containerName="installer" Dec 10 14:37:13 crc kubenswrapper[4727]: E1210 14:37:13.727201 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" containerName="oauth-openshift" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.727211 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" containerName="oauth-openshift" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.727383 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" containerName="oauth-openshift" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.727412 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="da95bc09-e230-4593-bb24-4723a883571f" containerName="installer" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.728236 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.733090 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.733150 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.733305 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.733404 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.733463 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.733577 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.737149 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.737316 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.737526 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.737845 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.738818 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.743865 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.748755 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.749676 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.764475 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.765356 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.765722 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.766706 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.777702 4727 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.789112 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.789256 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.805721 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.805705938 podStartE2EDuration="16.805705938s" podCreationTimestamp="2025-12-10 14:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:37:13.803471037 +0000 UTC m=+337.998245589" watchObservedRunningTime="2025-12-10 14:37:13.805705938 +0000 UTC m=+338.000480480" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.841493 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-session\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.841542 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.841572 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/04d234ca-d1ac-4fda-9116-aa12950ae21c-audit-dir\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.841595 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.841617 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.841779 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-user-template-error\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.841830 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/04d234ca-d1ac-4fda-9116-aa12950ae21c-audit-policies\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.841867 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-service-ca\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.841925 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-user-template-login\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.841979 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.842017 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtgdv\" (UniqueName: \"kubernetes.io/projected/04d234ca-d1ac-4fda-9116-aa12950ae21c-kube-api-access-xtgdv\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.842045 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-router-certs\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.842072 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.842104 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.851245 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.869496 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.901431 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.943341 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.943470 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtgdv\" (UniqueName: \"kubernetes.io/projected/04d234ca-d1ac-4fda-9116-aa12950ae21c-kube-api-access-xtgdv\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.943498 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-router-certs\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.943533 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.943575 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.943610 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-session\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " 
pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.943636 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.943667 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/04d234ca-d1ac-4fda-9116-aa12950ae21c-audit-dir\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.943698 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.943738 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.943765 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-user-template-error\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.943801 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/04d234ca-d1ac-4fda-9116-aa12950ae21c-audit-policies\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.943829 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-service-ca\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.943790 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/04d234ca-d1ac-4fda-9116-aa12950ae21c-audit-dir\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: 
I1210 14:37:13.943874 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-user-template-login\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.944735 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/04d234ca-d1ac-4fda-9116-aa12950ae21c-audit-policies\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.944743 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.945104 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-service-ca\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.945470 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.949006 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-session\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.949267 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.949440 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-user-template-error\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.949607 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-user-template-login\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.950377 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.950616 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.951972 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.959146 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/04d234ca-d1ac-4fda-9116-aa12950ae21c-v4-0-config-system-router-certs\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.966940 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 10 14:37:13 crc kubenswrapper[4727]: I1210 14:37:13.997473 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtgdv\" (UniqueName: \"kubernetes.io/projected/04d234ca-d1ac-4fda-9116-aa12950ae21c-kube-api-access-xtgdv\") pod \"oauth-openshift-758c4c8f95-g4hkk\" (UID: \"04d234ca-d1ac-4fda-9116-aa12950ae21c\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.025014 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.061796 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.067008 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.132381 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.157098 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.157804 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.213740 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-758c4c8f95-g4hkk"] Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.273210 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.300594 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.308925 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.437748 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.438181 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.485863 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.518423 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-758c4c8f95-g4hkk"] Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.537861 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.562173 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.571235 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa4939cc-34b3-4562-9798-92d443fb76ca" path="/var/lib/kubelet/pods/aa4939cc-34b3-4562-9798-92d443fb76ca/volumes" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.578993 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.607717 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.706828 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.810867 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.830183 4727 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.861423 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.911168 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 10 14:37:14 crc kubenswrapper[4727]: I1210 14:37:14.991201 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.006725 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.234462 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" event={"ID":"04d234ca-d1ac-4fda-9116-aa12950ae21c","Type":"ContainerStarted","Data":"b0bb423388b6def06ab97dfc182f7d7e6ad7461e24747bcbfe23c757897d38c8"} Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.234571 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" event={"ID":"04d234ca-d1ac-4fda-9116-aa12950ae21c","Type":"ContainerStarted","Data":"c4827ff47dd331d9338a6d16fc3d5b1905b69f1d92fe1fa5ea128913feec7311"} Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.234603 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.270253 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" podStartSLOduration=73.270232453 podStartE2EDuration="1m13.270232453s" podCreationTimestamp="2025-12-10 14:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:37:15.26642319 +0000 UTC m=+339.461197742" watchObservedRunningTime="2025-12-10 14:37:15.270232453 +0000 UTC m=+339.465006995" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.272734 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.326030 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-758c4c8f95-g4hkk" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.335050 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.496633 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.528210 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.676020 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.679539 4727 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.716804 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.740494 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.864724 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.881475 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.916725 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 10 14:37:15 crc kubenswrapper[4727]: I1210 14:37:15.998553 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.029340 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.049602 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.092328 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.127770 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.179473 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.211564 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.311724 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.312781 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.312878 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.366069 4727 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.374235 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.442595 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.476858 4727 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.479344 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.637479 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.653159 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 10 14:37:16 crc kubenswrapper[4727]: I1210 14:37:16.750401 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 10 14:37:17 crc kubenswrapper[4727]: I1210 14:37:17.108870 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 10 14:37:17 crc kubenswrapper[4727]: I1210 14:37:17.109631 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 10 14:37:17 crc kubenswrapper[4727]: I1210 14:37:17.145381 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 10 14:37:17 crc kubenswrapper[4727]: I1210 14:37:17.297827 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 10 14:37:17 crc kubenswrapper[4727]: I1210 14:37:17.298214 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 10 14:37:17 crc kubenswrapper[4727]: I1210 14:37:17.298221 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 10 14:37:17 crc kubenswrapper[4727]: I1210 14:37:17.547686 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 10 14:37:17 crc kubenswrapper[4727]: I1210 14:37:17.689047 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 10 14:37:17 crc kubenswrapper[4727]: I1210 14:37:17.694828 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 10 14:37:17 crc kubenswrapper[4727]: I1210 14:37:17.793207 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 10 14:37:17 crc kubenswrapper[4727]: I1210 14:37:17.810665 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 10 14:37:17 crc kubenswrapper[4727]: I1210 14:37:17.878290 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 10 14:37:17 crc kubenswrapper[4727]: I1210 14:37:17.948575 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 10 14:37:17 crc kubenswrapper[4727]: I1210 14:37:17.966858 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 10 14:37:18 crc kubenswrapper[4727]: 
I1210 14:37:18.172142 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.192307 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.227809 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.283809 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.295870 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.373985 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.446465 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.502840 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.506253 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.563116 4727 scope.go:117] "RemoveContainer" containerID="c891c53181eb68249621dd456a38145fd677d3217a7df8cb63ae4eb5763d20fc" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.602895 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.625610 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.633965 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.640267 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.676542 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.788684 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.808280 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.873491 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 10 14:37:18 crc kubenswrapper[4727]: I1210 14:37:18.930514 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" 
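
[Annotation] The pod_startup_latency_tracker entries above fit a simple relation, inferred here from the logged values rather than taken from kubelet source: podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp, and podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling), i.e. end-to-end startup time with image-pull time excluded. For pods that never pulled an image (both pull timestamps are the zero value 0001-01-01), the two durations coincide, as the kube-apiserver-startup-monitor-crc entry (41.724350133s for both) shows. A minimal Go sketch checking this assumed relation against the community-operators-p5v95 entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied verbatim from the community-operators-p5v95 entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-12-10 14:34:33 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2025-12-10 14:34:37.490128663 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-12-10 14:36:46.109688649 +0000 UTC")  // lastFinishedPulling
	watched := parse("2025-12-10 14:37:13.720316883 +0000 UTC")   // watchObservedRunningTime

	e2e := watched.Sub(created)          // podStartE2EDuration: 2m40.720316883s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 32.100756897s (pull time excluded)
	fmt.Println("podStartE2EDuration =", e2e)
	fmt.Println("podStartSLOduration =", slo)
}

Both printed values match the logged podStartE2EDuration="2m40.720316883s" and podStartSLOduration=32.100756897 exactly, which supports the assumed formula for this excerpt.
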
Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.026576 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.029651 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.065283 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.081064 4727 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.081584 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://4a218f555195c98fba5305726a0d461871638ac42448cec06c8a9402a4f7fe9a" gracePeriod=5 Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.166885 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.224238 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.262075 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.327556 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.327611 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1f54c92f202b06d09d640909bcc9c053843abadda58219f56b5b74266c814056"} Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.371186 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.412011 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.478130 4727 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.523594 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.646173 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.676073 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.787883 4727 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 10 14:37:19 crc kubenswrapper[4727]: I1210 14:37:19.970586 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 10 14:37:20 crc kubenswrapper[4727]: I1210 14:37:20.059880 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 10 14:37:20 crc kubenswrapper[4727]: I1210 14:37:20.144401 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 10 14:37:20 crc kubenswrapper[4727]: I1210 14:37:20.174685 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 10 14:37:20 crc kubenswrapper[4727]: I1210 14:37:20.330771 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 10 14:37:20 crc kubenswrapper[4727]: I1210 14:37:20.362156 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 10 14:37:20 crc kubenswrapper[4727]: I1210 14:37:20.416757 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 10 14:37:20 crc kubenswrapper[4727]: I1210 14:37:20.431767 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 10 14:37:20 crc kubenswrapper[4727]: I1210 14:37:20.457401 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 10 14:37:20 crc kubenswrapper[4727]: I1210 14:37:20.632467 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 10 14:37:20 crc kubenswrapper[4727]: I1210 14:37:20.702260 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 10 14:37:20 crc kubenswrapper[4727]: I1210 14:37:20.742807 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 14:37:20 crc kubenswrapper[4727]: I1210 14:37:20.873318 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 10 14:37:20 crc kubenswrapper[4727]: I1210 14:37:20.939655 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 10 14:37:20 crc kubenswrapper[4727]: I1210 14:37:20.954188 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 10 14:37:21 crc kubenswrapper[4727]: I1210 14:37:21.048653 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 10 14:37:21 crc kubenswrapper[4727]: I1210 14:37:21.089667 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 10 14:37:21 crc kubenswrapper[4727]: I1210 14:37:21.456025 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 10 14:37:21 crc kubenswrapper[4727]: 
I1210 14:37:21.456219 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 10 14:37:21 crc kubenswrapper[4727]: I1210 14:37:21.549012 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 10 14:37:21 crc kubenswrapper[4727]: I1210 14:37:21.551086 4727 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 10 14:37:21 crc kubenswrapper[4727]: I1210 14:37:21.695011 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 10 14:37:21 crc kubenswrapper[4727]: I1210 14:37:21.971499 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 10 14:37:21 crc kubenswrapper[4727]: I1210 14:37:21.994182 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 10 14:37:22 crc kubenswrapper[4727]: I1210 14:37:22.040595 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 10 14:37:22 crc kubenswrapper[4727]: I1210 14:37:22.101738 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 10 14:37:22 crc kubenswrapper[4727]: I1210 14:37:22.194549 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 10 14:37:22 crc kubenswrapper[4727]: I1210 14:37:22.375367 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 10 14:37:22 crc kubenswrapper[4727]: I1210 14:37:22.379999 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 10 14:37:22 crc kubenswrapper[4727]: I1210 14:37:22.717837 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 10 14:37:22 crc kubenswrapper[4727]: I1210 14:37:22.972468 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.208274 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.208387 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.391417 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.391549 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.391625 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.391808 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.391758 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.391933 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.391840 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.392020 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.392115 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.392563 4727 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.392606 4727 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.392627 4727 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.392639 4727 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.407227 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.486768 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.486829 4727 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="4a218f555195c98fba5305726a0d461871638ac42448cec06c8a9402a4f7fe9a" exitCode=137 Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.486942 4727 scope.go:117] "RemoveContainer" containerID="4a218f555195c98fba5305726a0d461871638ac42448cec06c8a9402a4f7fe9a" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.486969 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.493816 4727 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.504517 4727 scope.go:117] "RemoveContainer" containerID="4a218f555195c98fba5305726a0d461871638ac42448cec06c8a9402a4f7fe9a" Dec 10 14:37:24 crc kubenswrapper[4727]: E1210 14:37:24.505704 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a218f555195c98fba5305726a0d461871638ac42448cec06c8a9402a4f7fe9a\": container with ID starting with 4a218f555195c98fba5305726a0d461871638ac42448cec06c8a9402a4f7fe9a not found: ID does not exist" containerID="4a218f555195c98fba5305726a0d461871638ac42448cec06c8a9402a4f7fe9a" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.505757 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a218f555195c98fba5305726a0d461871638ac42448cec06c8a9402a4f7fe9a"} err="failed to get container status \"4a218f555195c98fba5305726a0d461871638ac42448cec06c8a9402a4f7fe9a\": rpc error: code = NotFound desc = could not find container \"4a218f555195c98fba5305726a0d461871638ac42448cec06c8a9402a4f7fe9a\": container with ID starting with 4a218f555195c98fba5305726a0d461871638ac42448cec06c8a9402a4f7fe9a not found: ID does not exist" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.879787 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.880367 4727 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.906183 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.906234 4727 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="b33abfa2-6798-419e-af38-47425a249634" Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.909794 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 14:37:24 crc kubenswrapper[4727]: I1210 14:37:24.909832 4727 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="b33abfa2-6798-419e-af38-47425a249634" Dec 10 14:37:35 crc kubenswrapper[4727]: I1210 14:37:35.742267 4727 generic.go:334] "Generic (PLEG): container finished" podID="521cc0a4-1afa-4ef6-bdd6-37c60f87273f" containerID="271c82a18c9ea81712e271d706eaac8c5afdc8a0019476be6ae6c9b343efe524" exitCode=0 Dec 10 14:37:35 crc kubenswrapper[4727]: I1210 14:37:35.742369 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" event={"ID":"521cc0a4-1afa-4ef6-bdd6-37c60f87273f","Type":"ContainerDied","Data":"271c82a18c9ea81712e271d706eaac8c5afdc8a0019476be6ae6c9b343efe524"} Dec 10 14:37:35 crc kubenswrapper[4727]: I1210 
14:37:35.743454 4727 scope.go:117] "RemoveContainer" containerID="271c82a18c9ea81712e271d706eaac8c5afdc8a0019476be6ae6c9b343efe524" Dec 10 14:37:36 crc kubenswrapper[4727]: I1210 14:37:36.750524 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" event={"ID":"521cc0a4-1afa-4ef6-bdd6-37c60f87273f","Type":"ContainerStarted","Data":"52e4d345e54a2ebb19127773b70b7376a769e901f187f0514077d94534751e6e"} Dec 10 14:37:36 crc kubenswrapper[4727]: I1210 14:37:36.751271 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:37:36 crc kubenswrapper[4727]: I1210 14:37:36.753513 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:37:37 crc kubenswrapper[4727]: I1210 14:37:37.899436 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:37:49 crc kubenswrapper[4727]: I1210 14:37:49.623527 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tzznn"] Dec 10 14:37:49 crc kubenswrapper[4727]: I1210 14:37:49.624272 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" podUID="f90473e8-86a2-4a1c-aadf-31d286ed0f21" containerName="controller-manager" containerID="cri-o://6da90353bb68a06771f43ac5133c41e6bae7d398c9df8133a2517f16a8f5a23c" gracePeriod=30 Dec 10 14:37:49 crc kubenswrapper[4727]: I1210 14:37:49.674062 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk"] Dec 10 14:37:49 crc kubenswrapper[4727]: I1210 14:37:49.674354 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" podUID="2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7" containerName="route-controller-manager" containerID="cri-o://85afa56485e59cf4c3d0b94215f6de2978430d1ee236ab646c04d05363b69987" gracePeriod=30 Dec 10 14:37:49 crc kubenswrapper[4727]: I1210 14:37:49.825407 4727 generic.go:334] "Generic (PLEG): container finished" podID="2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7" containerID="85afa56485e59cf4c3d0b94215f6de2978430d1ee236ab646c04d05363b69987" exitCode=0 Dec 10 14:37:49 crc kubenswrapper[4727]: I1210 14:37:49.825465 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" event={"ID":"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7","Type":"ContainerDied","Data":"85afa56485e59cf4c3d0b94215f6de2978430d1ee236ab646c04d05363b69987"} Dec 10 14:37:49 crc kubenswrapper[4727]: I1210 14:37:49.827287 4727 generic.go:334] "Generic (PLEG): container finished" podID="f90473e8-86a2-4a1c-aadf-31d286ed0f21" containerID="6da90353bb68a06771f43ac5133c41e6bae7d398c9df8133a2517f16a8f5a23c" exitCode=0 Dec 10 14:37:49 crc kubenswrapper[4727]: I1210 14:37:49.827312 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" event={"ID":"f90473e8-86a2-4a1c-aadf-31d286ed0f21","Type":"ContainerDied","Data":"6da90353bb68a06771f43ac5133c41e6bae7d398c9df8133a2517f16a8f5a23c"} Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.010405 4727 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.066708 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.172624 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-proxy-ca-bundles\") pod \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.172999 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f90473e8-86a2-4a1c-aadf-31d286ed0f21-serving-cert\") pod \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.173144 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-config\") pod \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.173808 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f90473e8-86a2-4a1c-aadf-31d286ed0f21" (UID: "f90473e8-86a2-4a1c-aadf-31d286ed0f21"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.173859 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-config" (OuterVolumeSpecName: "config") pod "f90473e8-86a2-4a1c-aadf-31d286ed0f21" (UID: "f90473e8-86a2-4a1c-aadf-31d286ed0f21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.174229 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-client-ca\") pod \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.174394 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrsp\" (UniqueName: \"kubernetes.io/projected/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-kube-api-access-qgrsp\") pod \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.174714 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-client-ca" (OuterVolumeSpecName: "client-ca") pod "f90473e8-86a2-4a1c-aadf-31d286ed0f21" (UID: "f90473e8-86a2-4a1c-aadf-31d286ed0f21"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.174880 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhv9c\" (UniqueName: \"kubernetes.io/projected/f90473e8-86a2-4a1c-aadf-31d286ed0f21-kube-api-access-jhv9c\") pod \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\" (UID: \"f90473e8-86a2-4a1c-aadf-31d286ed0f21\") " Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.175464 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-serving-cert\") pod \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.175721 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-client-ca\") pod \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.175862 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-config\") pod \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\" (UID: \"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7\") " Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.176350 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.176446 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.176583 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f90473e8-86a2-4a1c-aadf-31d286ed0f21-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.176532 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-client-ca" (OuterVolumeSpecName: "client-ca") pod "2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7" (UID: "2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.176753 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-config" (OuterVolumeSpecName: "config") pod "2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7" (UID: "2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.179253 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f90473e8-86a2-4a1c-aadf-31d286ed0f21-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f90473e8-86a2-4a1c-aadf-31d286ed0f21" (UID: "f90473e8-86a2-4a1c-aadf-31d286ed0f21"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.179324 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7" (UID: "2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.179343 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-kube-api-access-qgrsp" (OuterVolumeSpecName: "kube-api-access-qgrsp") pod "2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7" (UID: "2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7"). InnerVolumeSpecName "kube-api-access-qgrsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.179515 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f90473e8-86a2-4a1c-aadf-31d286ed0f21-kube-api-access-jhv9c" (OuterVolumeSpecName: "kube-api-access-jhv9c") pod "f90473e8-86a2-4a1c-aadf-31d286ed0f21" (UID: "f90473e8-86a2-4a1c-aadf-31d286ed0f21"). InnerVolumeSpecName "kube-api-access-jhv9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.278325 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f90473e8-86a2-4a1c-aadf-31d286ed0f21-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.278374 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgrsp\" (UniqueName: \"kubernetes.io/projected/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-kube-api-access-qgrsp\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.278388 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhv9c\" (UniqueName: \"kubernetes.io/projected/f90473e8-86a2-4a1c-aadf-31d286ed0f21-kube-api-access-jhv9c\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.278399 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.278410 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.278421 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.833298 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" event={"ID":"2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7","Type":"ContainerDied","Data":"1637340f4d6e687faf64cb6b56deee020217ee2ff3a27ce6c08cbcf1bb1fc671"} Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.833410 4727 scope.go:117] "RemoveContainer" containerID="85afa56485e59cf4c3d0b94215f6de2978430d1ee236ab646c04d05363b69987" Dec 10 14:37:50 crc 
kubenswrapper[4727]: I1210 14:37:50.833328 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.924753 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" event={"ID":"f90473e8-86a2-4a1c-aadf-31d286ed0f21","Type":"ContainerDied","Data":"f2ac23ef173239bba680cdd6c8d8d693677cf05a1c0d73e21ece98a5d8a12e9c"} Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.925168 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tzznn" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.939108 4727 scope.go:117] "RemoveContainer" containerID="6da90353bb68a06771f43ac5133c41e6bae7d398c9df8133a2517f16a8f5a23c" Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.946156 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk"] Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.949673 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q8jzk"] Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.958831 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tzznn"] Dec 10 14:37:50 crc kubenswrapper[4727]: I1210 14:37:50.962487 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tzznn"] Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.536260 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf"] Dec 10 14:37:51 crc kubenswrapper[4727]: E1210 14:37:51.536841 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f90473e8-86a2-4a1c-aadf-31d286ed0f21" containerName="controller-manager" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.536864 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f90473e8-86a2-4a1c-aadf-31d286ed0f21" containerName="controller-manager" Dec 10 14:37:51 crc kubenswrapper[4727]: E1210 14:37:51.536883 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.536890 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 10 14:37:51 crc kubenswrapper[4727]: E1210 14:37:51.536918 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7" containerName="route-controller-manager" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.536927 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7" containerName="route-controller-manager" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.537096 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.537111 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f90473e8-86a2-4a1c-aadf-31d286ed0f21" containerName="controller-manager" Dec 10 14:37:51 crc 
kubenswrapper[4727]: I1210 14:37:51.537122 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7" containerName="route-controller-manager" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.537617 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.542764 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d7964759b-hkh6h"] Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.543565 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.543577 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.543802 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.543666 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.543947 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.543970 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.545114 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.547778 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.547934 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.548203 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.548487 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.548815 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.549180 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.552940 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf"] Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.557551 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.558829 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d7964759b-hkh6h"] Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.734459 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-config\") pod \"controller-manager-d7964759b-hkh6h\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.734703 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3119559f-cafa-42c6-8b54-cc1c653bf3f3-config\") pod \"route-controller-manager-847b99569b-bpwwf\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.734985 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-serving-cert\") pod \"controller-manager-d7964759b-hkh6h\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.735208 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-proxy-ca-bundles\") pod \"controller-manager-d7964759b-hkh6h\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc 
kubenswrapper[4727]: I1210 14:37:51.735303 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3119559f-cafa-42c6-8b54-cc1c653bf3f3-client-ca\") pod \"route-controller-manager-847b99569b-bpwwf\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.735383 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-client-ca\") pod \"controller-manager-d7964759b-hkh6h\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.735483 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rldp\" (UniqueName: \"kubernetes.io/projected/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-kube-api-access-8rldp\") pod \"controller-manager-d7964759b-hkh6h\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.735582 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3119559f-cafa-42c6-8b54-cc1c653bf3f3-serving-cert\") pod \"route-controller-manager-847b99569b-bpwwf\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.735787 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt2sv\" (UniqueName: \"kubernetes.io/projected/3119559f-cafa-42c6-8b54-cc1c653bf3f3-kube-api-access-jt2sv\") pod \"route-controller-manager-847b99569b-bpwwf\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.837263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt2sv\" (UniqueName: \"kubernetes.io/projected/3119559f-cafa-42c6-8b54-cc1c653bf3f3-kube-api-access-jt2sv\") pod \"route-controller-manager-847b99569b-bpwwf\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.837339 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-config\") pod \"controller-manager-d7964759b-hkh6h\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.837444 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3119559f-cafa-42c6-8b54-cc1c653bf3f3-config\") pod \"route-controller-manager-847b99569b-bpwwf\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:51 crc 
kubenswrapper[4727]: I1210 14:37:51.837482 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-serving-cert\") pod \"controller-manager-d7964759b-hkh6h\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.837520 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-proxy-ca-bundles\") pod \"controller-manager-d7964759b-hkh6h\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.837558 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3119559f-cafa-42c6-8b54-cc1c653bf3f3-client-ca\") pod \"route-controller-manager-847b99569b-bpwwf\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.837592 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-client-ca\") pod \"controller-manager-d7964759b-hkh6h\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.837621 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rldp\" (UniqueName: \"kubernetes.io/projected/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-kube-api-access-8rldp\") pod \"controller-manager-d7964759b-hkh6h\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.837670 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3119559f-cafa-42c6-8b54-cc1c653bf3f3-serving-cert\") pod \"route-controller-manager-847b99569b-bpwwf\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.839154 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3119559f-cafa-42c6-8b54-cc1c653bf3f3-client-ca\") pod \"route-controller-manager-847b99569b-bpwwf\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.839288 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-proxy-ca-bundles\") pod \"controller-manager-d7964759b-hkh6h\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.839300 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3119559f-cafa-42c6-8b54-cc1c653bf3f3-config\") pod \"route-controller-manager-847b99569b-bpwwf\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.839791 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-config\") pod \"controller-manager-d7964759b-hkh6h\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.841791 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-client-ca\") pod \"controller-manager-d7964759b-hkh6h\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.846689 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3119559f-cafa-42c6-8b54-cc1c653bf3f3-serving-cert\") pod \"route-controller-manager-847b99569b-bpwwf\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.847047 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-serving-cert\") pod \"controller-manager-d7964759b-hkh6h\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.855540 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt2sv\" (UniqueName: \"kubernetes.io/projected/3119559f-cafa-42c6-8b54-cc1c653bf3f3-kube-api-access-jt2sv\") pod \"route-controller-manager-847b99569b-bpwwf\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.858827 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rldp\" (UniqueName: \"kubernetes.io/projected/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-kube-api-access-8rldp\") pod \"controller-manager-d7964759b-hkh6h\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.863513 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:51 crc kubenswrapper[4727]: I1210 14:37:51.869469 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:52 crc kubenswrapper[4727]: I1210 14:37:52.143418 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d7964759b-hkh6h"] Dec 10 14:37:52 crc kubenswrapper[4727]: I1210 14:37:52.310101 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf"] Dec 10 14:37:52 crc kubenswrapper[4727]: W1210 14:37:52.316832 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3119559f_cafa_42c6_8b54_cc1c653bf3f3.slice/crio-034e42b00b559fb336356850dc681a40d5ba8403392747a2c3cccbebd1f12f86 WatchSource:0}: Error finding container 034e42b00b559fb336356850dc681a40d5ba8403392747a2c3cccbebd1f12f86: Status 404 returned error can't find the container with id 034e42b00b559fb336356850dc681a40d5ba8403392747a2c3cccbebd1f12f86 Dec 10 14:37:52 crc kubenswrapper[4727]: I1210 14:37:52.569452 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7" path="/var/lib/kubelet/pods/2ff678b2-9f2a-4fb1-b3c4-ec0d0df622a7/volumes" Dec 10 14:37:52 crc kubenswrapper[4727]: I1210 14:37:52.570146 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f90473e8-86a2-4a1c-aadf-31d286ed0f21" path="/var/lib/kubelet/pods/f90473e8-86a2-4a1c-aadf-31d286ed0f21/volumes" Dec 10 14:37:52 crc kubenswrapper[4727]: I1210 14:37:52.942770 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" event={"ID":"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e","Type":"ContainerStarted","Data":"ab390df16d98ec24c9bfd9db10f000099dee8d22abed492c1cc93166d2315cba"} Dec 10 14:37:52 crc kubenswrapper[4727]: I1210 14:37:52.943290 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" event={"ID":"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e","Type":"ContainerStarted","Data":"82c8e1c14cc58f97cc7bed7227a0fbdcaa57d87b067a20156fbb98aa3764230d"} Dec 10 14:37:52 crc kubenswrapper[4727]: I1210 14:37:52.943872 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:52 crc kubenswrapper[4727]: I1210 14:37:52.945392 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" event={"ID":"3119559f-cafa-42c6-8b54-cc1c653bf3f3","Type":"ContainerStarted","Data":"a7c111dc7c373192a790f0cd02d84b6758a2b206aa91d0569e32f9d728484137"} Dec 10 14:37:52 crc kubenswrapper[4727]: I1210 14:37:52.945422 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" event={"ID":"3119559f-cafa-42c6-8b54-cc1c653bf3f3","Type":"ContainerStarted","Data":"034e42b00b559fb336356850dc681a40d5ba8403392747a2c3cccbebd1f12f86"} Dec 10 14:37:52 crc kubenswrapper[4727]: I1210 14:37:52.945666 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:52 crc kubenswrapper[4727]: I1210 14:37:52.948306 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:52 crc 
kubenswrapper[4727]: I1210 14:37:52.969440 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" podStartSLOduration=3.969376857 podStartE2EDuration="3.969376857s" podCreationTimestamp="2025-12-10 14:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:37:52.967243448 +0000 UTC m=+377.162018000" watchObservedRunningTime="2025-12-10 14:37:52.969376857 +0000 UTC m=+377.164151399" Dec 10 14:37:52 crc kubenswrapper[4727]: I1210 14:37:52.989376 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" podStartSLOduration=3.989354881 podStartE2EDuration="3.989354881s" podCreationTimestamp="2025-12-10 14:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:37:52.988037755 +0000 UTC m=+377.182812297" watchObservedRunningTime="2025-12-10 14:37:52.989354881 +0000 UTC m=+377.184129423" Dec 10 14:37:53 crc kubenswrapper[4727]: I1210 14:37:53.192745 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:57 crc kubenswrapper[4727]: I1210 14:37:57.914592 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d7964759b-hkh6h"] Dec 10 14:37:57 crc kubenswrapper[4727]: I1210 14:37:57.915393 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" podUID="9c4bd0dd-f59b-41a8-9a8b-28185ea5203e" containerName="controller-manager" containerID="cri-o://ab390df16d98ec24c9bfd9db10f000099dee8d22abed492c1cc93166d2315cba" gracePeriod=30 Dec 10 14:37:57 crc kubenswrapper[4727]: I1210 14:37:57.950050 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf"] Dec 10 14:37:57 crc kubenswrapper[4727]: I1210 14:37:57.950253 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" podUID="3119559f-cafa-42c6-8b54-cc1c653bf3f3" containerName="route-controller-manager" containerID="cri-o://a7c111dc7c373192a790f0cd02d84b6758a2b206aa91d0569e32f9d728484137" gracePeriod=30 Dec 10 14:37:58 crc kubenswrapper[4727]: I1210 14:37:58.547994 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:58 crc kubenswrapper[4727]: I1210 14:37:58.719950 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3119559f-cafa-42c6-8b54-cc1c653bf3f3-serving-cert\") pod \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " Dec 10 14:37:58 crc kubenswrapper[4727]: I1210 14:37:58.720084 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3119559f-cafa-42c6-8b54-cc1c653bf3f3-client-ca\") pod \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " Dec 10 14:37:58 crc kubenswrapper[4727]: I1210 14:37:58.720120 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt2sv\" (UniqueName: \"kubernetes.io/projected/3119559f-cafa-42c6-8b54-cc1c653bf3f3-kube-api-access-jt2sv\") pod \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " Dec 10 14:37:58 crc kubenswrapper[4727]: I1210 14:37:58.720314 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3119559f-cafa-42c6-8b54-cc1c653bf3f3-config\") pod \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\" (UID: \"3119559f-cafa-42c6-8b54-cc1c653bf3f3\") " Dec 10 14:37:58 crc kubenswrapper[4727]: I1210 14:37:58.720979 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3119559f-cafa-42c6-8b54-cc1c653bf3f3-client-ca" (OuterVolumeSpecName: "client-ca") pod "3119559f-cafa-42c6-8b54-cc1c653bf3f3" (UID: "3119559f-cafa-42c6-8b54-cc1c653bf3f3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:37:58 crc kubenswrapper[4727]: I1210 14:37:58.721146 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3119559f-cafa-42c6-8b54-cc1c653bf3f3-config" (OuterVolumeSpecName: "config") pod "3119559f-cafa-42c6-8b54-cc1c653bf3f3" (UID: "3119559f-cafa-42c6-8b54-cc1c653bf3f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:37:58 crc kubenswrapper[4727]: I1210 14:37:58.721551 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3119559f-cafa-42c6-8b54-cc1c653bf3f3-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:58 crc kubenswrapper[4727]: I1210 14:37:58.721603 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3119559f-cafa-42c6-8b54-cc1c653bf3f3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:58 crc kubenswrapper[4727]: I1210 14:37:58.726327 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3119559f-cafa-42c6-8b54-cc1c653bf3f3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3119559f-cafa-42c6-8b54-cc1c653bf3f3" (UID: "3119559f-cafa-42c6-8b54-cc1c653bf3f3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:37:58 crc kubenswrapper[4727]: I1210 14:37:58.726450 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3119559f-cafa-42c6-8b54-cc1c653bf3f3-kube-api-access-jt2sv" (OuterVolumeSpecName: "kube-api-access-jt2sv") pod "3119559f-cafa-42c6-8b54-cc1c653bf3f3" (UID: "3119559f-cafa-42c6-8b54-cc1c653bf3f3"). InnerVolumeSpecName "kube-api-access-jt2sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:37:58 crc kubenswrapper[4727]: I1210 14:37:58.822823 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3119559f-cafa-42c6-8b54-cc1c653bf3f3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:58 crc kubenswrapper[4727]: I1210 14:37:58.822865 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt2sv\" (UniqueName: \"kubernetes.io/projected/3119559f-cafa-42c6-8b54-cc1c653bf3f3-kube-api-access-jt2sv\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.002207 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.126501 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-serving-cert\") pod \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.126538 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-proxy-ca-bundles\") pod \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.126665 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rldp\" (UniqueName: \"kubernetes.io/projected/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-kube-api-access-8rldp\") pod \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.126688 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-config\") pod \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.126712 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-client-ca\") pod \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\" (UID: \"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e\") " Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.127652 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-client-ca" (OuterVolumeSpecName: "client-ca") pod "9c4bd0dd-f59b-41a8-9a8b-28185ea5203e" (UID: "9c4bd0dd-f59b-41a8-9a8b-28185ea5203e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.127678 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9c4bd0dd-f59b-41a8-9a8b-28185ea5203e" (UID: "9c4bd0dd-f59b-41a8-9a8b-28185ea5203e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.127936 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-config" (OuterVolumeSpecName: "config") pod "9c4bd0dd-f59b-41a8-9a8b-28185ea5203e" (UID: "9c4bd0dd-f59b-41a8-9a8b-28185ea5203e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.132714 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9c4bd0dd-f59b-41a8-9a8b-28185ea5203e" (UID: "9c4bd0dd-f59b-41a8-9a8b-28185ea5203e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.132787 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-kube-api-access-8rldp" (OuterVolumeSpecName: "kube-api-access-8rldp") pod "9c4bd0dd-f59b-41a8-9a8b-28185ea5203e" (UID: "9c4bd0dd-f59b-41a8-9a8b-28185ea5203e"). InnerVolumeSpecName "kube-api-access-8rldp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.135548 4727 generic.go:334] "Generic (PLEG): container finished" podID="9c4bd0dd-f59b-41a8-9a8b-28185ea5203e" containerID="ab390df16d98ec24c9bfd9db10f000099dee8d22abed492c1cc93166d2315cba" exitCode=0 Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.135606 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" event={"ID":"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e","Type":"ContainerDied","Data":"ab390df16d98ec24c9bfd9db10f000099dee8d22abed492c1cc93166d2315cba"} Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.135664 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" event={"ID":"9c4bd0dd-f59b-41a8-9a8b-28185ea5203e","Type":"ContainerDied","Data":"82c8e1c14cc58f97cc7bed7227a0fbdcaa57d87b067a20156fbb98aa3764230d"} Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.135660 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d7964759b-hkh6h" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.135687 4727 scope.go:117] "RemoveContainer" containerID="ab390df16d98ec24c9bfd9db10f000099dee8d22abed492c1cc93166d2315cba" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.137365 4727 generic.go:334] "Generic (PLEG): container finished" podID="3119559f-cafa-42c6-8b54-cc1c653bf3f3" containerID="a7c111dc7c373192a790f0cd02d84b6758a2b206aa91d0569e32f9d728484137" exitCode=0 Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.137408 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" event={"ID":"3119559f-cafa-42c6-8b54-cc1c653bf3f3","Type":"ContainerDied","Data":"a7c111dc7c373192a790f0cd02d84b6758a2b206aa91d0569e32f9d728484137"} Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.137419 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.137437 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf" event={"ID":"3119559f-cafa-42c6-8b54-cc1c653bf3f3","Type":"ContainerDied","Data":"034e42b00b559fb336356850dc681a40d5ba8403392747a2c3cccbebd1f12f86"} Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.160293 4727 scope.go:117] "RemoveContainer" containerID="ab390df16d98ec24c9bfd9db10f000099dee8d22abed492c1cc93166d2315cba" Dec 10 14:37:59 crc kubenswrapper[4727]: E1210 14:37:59.161493 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab390df16d98ec24c9bfd9db10f000099dee8d22abed492c1cc93166d2315cba\": container with ID starting with ab390df16d98ec24c9bfd9db10f000099dee8d22abed492c1cc93166d2315cba not found: ID does not exist" containerID="ab390df16d98ec24c9bfd9db10f000099dee8d22abed492c1cc93166d2315cba" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.161604 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab390df16d98ec24c9bfd9db10f000099dee8d22abed492c1cc93166d2315cba"} err="failed to get container status \"ab390df16d98ec24c9bfd9db10f000099dee8d22abed492c1cc93166d2315cba\": rpc error: code = NotFound desc = could not find container \"ab390df16d98ec24c9bfd9db10f000099dee8d22abed492c1cc93166d2315cba\": container with ID starting with ab390df16d98ec24c9bfd9db10f000099dee8d22abed492c1cc93166d2315cba not found: ID does not exist" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.161703 4727 scope.go:117] "RemoveContainer" containerID="a7c111dc7c373192a790f0cd02d84b6758a2b206aa91d0569e32f9d728484137" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.175681 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf"] Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.183103 4727 scope.go:117] "RemoveContainer" containerID="a7c111dc7c373192a790f0cd02d84b6758a2b206aa91d0569e32f9d728484137" Dec 10 14:37:59 crc kubenswrapper[4727]: E1210 14:37:59.184404 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7c111dc7c373192a790f0cd02d84b6758a2b206aa91d0569e32f9d728484137\": container with ID starting with 
a7c111dc7c373192a790f0cd02d84b6758a2b206aa91d0569e32f9d728484137 not found: ID does not exist" containerID="a7c111dc7c373192a790f0cd02d84b6758a2b206aa91d0569e32f9d728484137" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.184532 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c111dc7c373192a790f0cd02d84b6758a2b206aa91d0569e32f9d728484137"} err="failed to get container status \"a7c111dc7c373192a790f0cd02d84b6758a2b206aa91d0569e32f9d728484137\": rpc error: code = NotFound desc = could not find container \"a7c111dc7c373192a790f0cd02d84b6758a2b206aa91d0569e32f9d728484137\": container with ID starting with a7c111dc7c373192a790f0cd02d84b6758a2b206aa91d0569e32f9d728484137 not found: ID does not exist" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.184578 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-847b99569b-bpwwf"] Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.188958 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d7964759b-hkh6h"] Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.201386 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d7964759b-hkh6h"] Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.228665 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rldp\" (UniqueName: \"kubernetes.io/projected/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-kube-api-access-8rldp\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.228716 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.228730 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.228744 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.228757 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.547247 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5db558bd57-ddvqr"] Dec 10 14:37:59 crc kubenswrapper[4727]: E1210 14:37:59.548582 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c4bd0dd-f59b-41a8-9a8b-28185ea5203e" containerName="controller-manager" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.548614 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4bd0dd-f59b-41a8-9a8b-28185ea5203e" containerName="controller-manager" Dec 10 14:37:59 crc kubenswrapper[4727]: E1210 14:37:59.548653 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3119559f-cafa-42c6-8b54-cc1c653bf3f3" containerName="route-controller-manager" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.548666 4727 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3119559f-cafa-42c6-8b54-cc1c653bf3f3" containerName="route-controller-manager" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.548859 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c4bd0dd-f59b-41a8-9a8b-28185ea5203e" containerName="controller-manager" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.548891 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3119559f-cafa-42c6-8b54-cc1c653bf3f3" containerName="route-controller-manager" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.550302 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.553440 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.553657 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.554014 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.554638 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.556464 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.557047 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b"] Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.557452 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.557781 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.562166 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.562754 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.563065 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.563165 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.563775 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.564471 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.567282 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5db558bd57-ddvqr"] Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.567723 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.582166 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b"] Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.638022 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c9f0de-110d-4513-aa8b-53230e6a2859-config\") pod \"route-controller-manager-69c79dd4cc-4w46b\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.638073 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck9d4\" (UniqueName: \"kubernetes.io/projected/b2c9f0de-110d-4513-aa8b-53230e6a2859-kube-api-access-ck9d4\") pod \"route-controller-manager-69c79dd4cc-4w46b\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.638103 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2c9f0de-110d-4513-aa8b-53230e6a2859-client-ca\") pod \"route-controller-manager-69c79dd4cc-4w46b\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.638131 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxdf4\" (UniqueName: \"kubernetes.io/projected/8121a631-cd75-4f64-ac72-37a77a7450b8-kube-api-access-kxdf4\") pod \"controller-manager-5db558bd57-ddvqr\" (UID: 
\"8121a631-cd75-4f64-ac72-37a77a7450b8\") " pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.638154 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-client-ca\") pod \"controller-manager-5db558bd57-ddvqr\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") " pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.638189 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c9f0de-110d-4513-aa8b-53230e6a2859-serving-cert\") pod \"route-controller-manager-69c79dd4cc-4w46b\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.638214 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8121a631-cd75-4f64-ac72-37a77a7450b8-serving-cert\") pod \"controller-manager-5db558bd57-ddvqr\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") " pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.638242 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-config\") pod \"controller-manager-5db558bd57-ddvqr\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") " pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.638262 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-proxy-ca-bundles\") pod \"controller-manager-5db558bd57-ddvqr\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") " pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.739278 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c9f0de-110d-4513-aa8b-53230e6a2859-config\") pod \"route-controller-manager-69c79dd4cc-4w46b\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.739344 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck9d4\" (UniqueName: \"kubernetes.io/projected/b2c9f0de-110d-4513-aa8b-53230e6a2859-kube-api-access-ck9d4\") pod \"route-controller-manager-69c79dd4cc-4w46b\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.739375 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2c9f0de-110d-4513-aa8b-53230e6a2859-client-ca\") pod \"route-controller-manager-69c79dd4cc-4w46b\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " 
pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.739407 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxdf4\" (UniqueName: \"kubernetes.io/projected/8121a631-cd75-4f64-ac72-37a77a7450b8-kube-api-access-kxdf4\") pod \"controller-manager-5db558bd57-ddvqr\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") " pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.739433 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-client-ca\") pod \"controller-manager-5db558bd57-ddvqr\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") " pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.739455 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c9f0de-110d-4513-aa8b-53230e6a2859-serving-cert\") pod \"route-controller-manager-69c79dd4cc-4w46b\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.739489 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8121a631-cd75-4f64-ac72-37a77a7450b8-serving-cert\") pod \"controller-manager-5db558bd57-ddvqr\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") " pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.739515 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-config\") pod \"controller-manager-5db558bd57-ddvqr\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") " pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.739537 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-proxy-ca-bundles\") pod \"controller-manager-5db558bd57-ddvqr\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") " pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.740727 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c9f0de-110d-4513-aa8b-53230e6a2859-config\") pod \"route-controller-manager-69c79dd4cc-4w46b\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.741103 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2c9f0de-110d-4513-aa8b-53230e6a2859-client-ca\") pod \"route-controller-manager-69c79dd4cc-4w46b\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.741183 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-client-ca\") pod \"controller-manager-5db558bd57-ddvqr\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") " pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.741925 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-config\") pod \"controller-manager-5db558bd57-ddvqr\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") " pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.742573 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-proxy-ca-bundles\") pod \"controller-manager-5db558bd57-ddvqr\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") " pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.745281 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c9f0de-110d-4513-aa8b-53230e6a2859-serving-cert\") pod \"route-controller-manager-69c79dd4cc-4w46b\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.746397 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8121a631-cd75-4f64-ac72-37a77a7450b8-serving-cert\") pod \"controller-manager-5db558bd57-ddvqr\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") " pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.759070 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck9d4\" (UniqueName: \"kubernetes.io/projected/b2c9f0de-110d-4513-aa8b-53230e6a2859-kube-api-access-ck9d4\") pod \"route-controller-manager-69c79dd4cc-4w46b\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.760098 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxdf4\" (UniqueName: \"kubernetes.io/projected/8121a631-cd75-4f64-ac72-37a77a7450b8-kube-api-access-kxdf4\") pod \"controller-manager-5db558bd57-ddvqr\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") " pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.866293 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:37:59 crc kubenswrapper[4727]: I1210 14:37:59.875386 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:38:00 crc kubenswrapper[4727]: I1210 14:38:00.265962 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5db558bd57-ddvqr"] Dec 10 14:38:00 crc kubenswrapper[4727]: I1210 14:38:00.324139 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b"] Dec 10 14:38:00 crc kubenswrapper[4727]: W1210 14:38:00.341809 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2c9f0de_110d_4513_aa8b_53230e6a2859.slice/crio-d661161cd5ff0bec303bc39b245c3059d280a5040130d4e9309121299da9a4ce WatchSource:0}: Error finding container d661161cd5ff0bec303bc39b245c3059d280a5040130d4e9309121299da9a4ce: Status 404 returned error can't find the container with id d661161cd5ff0bec303bc39b245c3059d280a5040130d4e9309121299da9a4ce Dec 10 14:38:00 crc kubenswrapper[4727]: I1210 14:38:00.571800 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3119559f-cafa-42c6-8b54-cc1c653bf3f3" path="/var/lib/kubelet/pods/3119559f-cafa-42c6-8b54-cc1c653bf3f3/volumes" Dec 10 14:38:00 crc kubenswrapper[4727]: I1210 14:38:00.572498 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c4bd0dd-f59b-41a8-9a8b-28185ea5203e" path="/var/lib/kubelet/pods/9c4bd0dd-f59b-41a8-9a8b-28185ea5203e/volumes" Dec 10 14:38:01 crc kubenswrapper[4727]: I1210 14:38:01.156780 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" event={"ID":"b2c9f0de-110d-4513-aa8b-53230e6a2859","Type":"ContainerStarted","Data":"ff3e626464f2eb2cd44c7a88b2e8a78259e6de39c3ed66a1a7a3e83bee1f7a90"} Dec 10 14:38:01 crc kubenswrapper[4727]: I1210 14:38:01.156836 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" event={"ID":"b2c9f0de-110d-4513-aa8b-53230e6a2859","Type":"ContainerStarted","Data":"d661161cd5ff0bec303bc39b245c3059d280a5040130d4e9309121299da9a4ce"} Dec 10 14:38:01 crc kubenswrapper[4727]: I1210 14:38:01.156860 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:38:01 crc kubenswrapper[4727]: I1210 14:38:01.158492 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" event={"ID":"8121a631-cd75-4f64-ac72-37a77a7450b8","Type":"ContainerStarted","Data":"5df065c5cc0e8c5667c7c5dded1b097a4c664d616e4be11474c6c2fe3ed6fddf"} Dec 10 14:38:01 crc kubenswrapper[4727]: I1210 14:38:01.158637 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" event={"ID":"8121a631-cd75-4f64-ac72-37a77a7450b8","Type":"ContainerStarted","Data":"416eec9801cb6bc0b1c02cd5d7cddf89145ce1392f01e37778a717b88d0dd893"} Dec 10 14:38:01 crc kubenswrapper[4727]: I1210 14:38:01.159038 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" Dec 10 14:38:01 crc kubenswrapper[4727]: I1210 14:38:01.165580 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" 
Dec 10 14:38:01 crc kubenswrapper[4727]: I1210 14:38:01.166205 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:38:01 crc kubenswrapper[4727]: I1210 14:38:01.181295 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" podStartSLOduration=3.181271852 podStartE2EDuration="3.181271852s" podCreationTimestamp="2025-12-10 14:37:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:38:01.17565974 +0000 UTC m=+385.370434302" watchObservedRunningTime="2025-12-10 14:38:01.181271852 +0000 UTC m=+385.376046394" Dec 10 14:38:01 crc kubenswrapper[4727]: I1210 14:38:01.195204 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" podStartSLOduration=3.19517916 podStartE2EDuration="3.19517916s" podCreationTimestamp="2025-12-10 14:37:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:38:01.194747268 +0000 UTC m=+385.389521820" watchObservedRunningTime="2025-12-10 14:38:01.19517916 +0000 UTC m=+385.389953702" Dec 10 14:38:07 crc kubenswrapper[4727]: I1210 14:38:07.724198 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:38:07 crc kubenswrapper[4727]: I1210 14:38:07.724950 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:38:09 crc kubenswrapper[4727]: I1210 14:38:09.475411 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b"] Dec 10 14:38:09 crc kubenswrapper[4727]: I1210 14:38:09.475883 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" podUID="b2c9f0de-110d-4513-aa8b-53230e6a2859" containerName="route-controller-manager" containerID="cri-o://ff3e626464f2eb2cd44c7a88b2e8a78259e6de39c3ed66a1a7a3e83bee1f7a90" gracePeriod=30 Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.043285 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.189027 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c9f0de-110d-4513-aa8b-53230e6a2859-serving-cert\") pod \"b2c9f0de-110d-4513-aa8b-53230e6a2859\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.189110 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c9f0de-110d-4513-aa8b-53230e6a2859-config\") pod \"b2c9f0de-110d-4513-aa8b-53230e6a2859\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.189192 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck9d4\" (UniqueName: \"kubernetes.io/projected/b2c9f0de-110d-4513-aa8b-53230e6a2859-kube-api-access-ck9d4\") pod \"b2c9f0de-110d-4513-aa8b-53230e6a2859\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.189234 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2c9f0de-110d-4513-aa8b-53230e6a2859-client-ca\") pod \"b2c9f0de-110d-4513-aa8b-53230e6a2859\" (UID: \"b2c9f0de-110d-4513-aa8b-53230e6a2859\") " Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.190199 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c9f0de-110d-4513-aa8b-53230e6a2859-config" (OuterVolumeSpecName: "config") pod "b2c9f0de-110d-4513-aa8b-53230e6a2859" (UID: "b2c9f0de-110d-4513-aa8b-53230e6a2859"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.190208 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c9f0de-110d-4513-aa8b-53230e6a2859-client-ca" (OuterVolumeSpecName: "client-ca") pod "b2c9f0de-110d-4513-aa8b-53230e6a2859" (UID: "b2c9f0de-110d-4513-aa8b-53230e6a2859"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.196610 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c9f0de-110d-4513-aa8b-53230e6a2859-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b2c9f0de-110d-4513-aa8b-53230e6a2859" (UID: "b2c9f0de-110d-4513-aa8b-53230e6a2859"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.197006 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c9f0de-110d-4513-aa8b-53230e6a2859-kube-api-access-ck9d4" (OuterVolumeSpecName: "kube-api-access-ck9d4") pod "b2c9f0de-110d-4513-aa8b-53230e6a2859" (UID: "b2c9f0de-110d-4513-aa8b-53230e6a2859"). InnerVolumeSpecName "kube-api-access-ck9d4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.218954 4727 generic.go:334] "Generic (PLEG): container finished" podID="b2c9f0de-110d-4513-aa8b-53230e6a2859" containerID="ff3e626464f2eb2cd44c7a88b2e8a78259e6de39c3ed66a1a7a3e83bee1f7a90" exitCode=0 Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.219012 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" event={"ID":"b2c9f0de-110d-4513-aa8b-53230e6a2859","Type":"ContainerDied","Data":"ff3e626464f2eb2cd44c7a88b2e8a78259e6de39c3ed66a1a7a3e83bee1f7a90"} Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.219054 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" event={"ID":"b2c9f0de-110d-4513-aa8b-53230e6a2859","Type":"ContainerDied","Data":"d661161cd5ff0bec303bc39b245c3059d280a5040130d4e9309121299da9a4ce"} Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.219078 4727 scope.go:117] "RemoveContainer" containerID="ff3e626464f2eb2cd44c7a88b2e8a78259e6de39c3ed66a1a7a3e83bee1f7a90" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.219094 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.249068 4727 scope.go:117] "RemoveContainer" containerID="ff3e626464f2eb2cd44c7a88b2e8a78259e6de39c3ed66a1a7a3e83bee1f7a90" Dec 10 14:38:10 crc kubenswrapper[4727]: E1210 14:38:10.249798 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff3e626464f2eb2cd44c7a88b2e8a78259e6de39c3ed66a1a7a3e83bee1f7a90\": container with ID starting with ff3e626464f2eb2cd44c7a88b2e8a78259e6de39c3ed66a1a7a3e83bee1f7a90 not found: ID does not exist" containerID="ff3e626464f2eb2cd44c7a88b2e8a78259e6de39c3ed66a1a7a3e83bee1f7a90" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.249836 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff3e626464f2eb2cd44c7a88b2e8a78259e6de39c3ed66a1a7a3e83bee1f7a90"} err="failed to get container status \"ff3e626464f2eb2cd44c7a88b2e8a78259e6de39c3ed66a1a7a3e83bee1f7a90\": rpc error: code = NotFound desc = could not find container \"ff3e626464f2eb2cd44c7a88b2e8a78259e6de39c3ed66a1a7a3e83bee1f7a90\": container with ID starting with ff3e626464f2eb2cd44c7a88b2e8a78259e6de39c3ed66a1a7a3e83bee1f7a90 not found: ID does not exist" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.260543 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b"] Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.264660 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b"] Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.290936 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2c9f0de-110d-4513-aa8b-53230e6a2859-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.290994 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c9f0de-110d-4513-aa8b-53230e6a2859-serving-cert\") on node 
\"crc\" DevicePath \"\"" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.291009 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c9f0de-110d-4513-aa8b-53230e6a2859-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.291025 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck9d4\" (UniqueName: \"kubernetes.io/projected/b2c9f0de-110d-4513-aa8b-53230e6a2859-kube-api-access-ck9d4\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.549998 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f"] Dec 10 14:38:10 crc kubenswrapper[4727]: E1210 14:38:10.551234 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c9f0de-110d-4513-aa8b-53230e6a2859" containerName="route-controller-manager" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.551347 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c9f0de-110d-4513-aa8b-53230e6a2859" containerName="route-controller-manager" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.551534 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c9f0de-110d-4513-aa8b-53230e6a2859" containerName="route-controller-manager" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.552079 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.555652 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.555669 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.555825 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.555947 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.556035 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.556137 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.572128 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c9f0de-110d-4513-aa8b-53230e6a2859" path="/var/lib/kubelet/pods/b2c9f0de-110d-4513-aa8b-53230e6a2859/volumes" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.572797 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f"] Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.595274 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a69dd5a-bff9-4f60-b67d-482d8fbfdd87-serving-cert\") pod \"route-controller-manager-665dbdc779-bhs5f\" (UID: 
\"0a69dd5a-bff9-4f60-b67d-482d8fbfdd87\") " pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.595428 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czk46\" (UniqueName: \"kubernetes.io/projected/0a69dd5a-bff9-4f60-b67d-482d8fbfdd87-kube-api-access-czk46\") pod \"route-controller-manager-665dbdc779-bhs5f\" (UID: \"0a69dd5a-bff9-4f60-b67d-482d8fbfdd87\") " pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.595508 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a69dd5a-bff9-4f60-b67d-482d8fbfdd87-config\") pod \"route-controller-manager-665dbdc779-bhs5f\" (UID: \"0a69dd5a-bff9-4f60-b67d-482d8fbfdd87\") " pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.595549 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a69dd5a-bff9-4f60-b67d-482d8fbfdd87-client-ca\") pod \"route-controller-manager-665dbdc779-bhs5f\" (UID: \"0a69dd5a-bff9-4f60-b67d-482d8fbfdd87\") " pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.696702 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czk46\" (UniqueName: \"kubernetes.io/projected/0a69dd5a-bff9-4f60-b67d-482d8fbfdd87-kube-api-access-czk46\") pod \"route-controller-manager-665dbdc779-bhs5f\" (UID: \"0a69dd5a-bff9-4f60-b67d-482d8fbfdd87\") " pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.696782 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a69dd5a-bff9-4f60-b67d-482d8fbfdd87-config\") pod \"route-controller-manager-665dbdc779-bhs5f\" (UID: \"0a69dd5a-bff9-4f60-b67d-482d8fbfdd87\") " pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.696819 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a69dd5a-bff9-4f60-b67d-482d8fbfdd87-client-ca\") pod \"route-controller-manager-665dbdc779-bhs5f\" (UID: \"0a69dd5a-bff9-4f60-b67d-482d8fbfdd87\") " pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.696870 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a69dd5a-bff9-4f60-b67d-482d8fbfdd87-serving-cert\") pod \"route-controller-manager-665dbdc779-bhs5f\" (UID: \"0a69dd5a-bff9-4f60-b67d-482d8fbfdd87\") " pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.698427 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a69dd5a-bff9-4f60-b67d-482d8fbfdd87-client-ca\") pod \"route-controller-manager-665dbdc779-bhs5f\" (UID: 
\"0a69dd5a-bff9-4f60-b67d-482d8fbfdd87\") " pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.698621 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a69dd5a-bff9-4f60-b67d-482d8fbfdd87-config\") pod \"route-controller-manager-665dbdc779-bhs5f\" (UID: \"0a69dd5a-bff9-4f60-b67d-482d8fbfdd87\") " pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.703651 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a69dd5a-bff9-4f60-b67d-482d8fbfdd87-serving-cert\") pod \"route-controller-manager-665dbdc779-bhs5f\" (UID: \"0a69dd5a-bff9-4f60-b67d-482d8fbfdd87\") " pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.718485 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czk46\" (UniqueName: \"kubernetes.io/projected/0a69dd5a-bff9-4f60-b67d-482d8fbfdd87-kube-api-access-czk46\") pod \"route-controller-manager-665dbdc779-bhs5f\" (UID: \"0a69dd5a-bff9-4f60-b67d-482d8fbfdd87\") " pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.876554 4727 patch_prober.go:28] interesting pod/route-controller-manager-69c79dd4cc-4w46b container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.876637 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-4w46b" podUID="b2c9f0de-110d-4513-aa8b-53230e6a2859" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 10 14:38:10 crc kubenswrapper[4727]: I1210 14:38:10.883013 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:11 crc kubenswrapper[4727]: I1210 14:38:11.301519 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f"] Dec 10 14:38:11 crc kubenswrapper[4727]: W1210 14:38:11.307741 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a69dd5a_bff9_4f60_b67d_482d8fbfdd87.slice/crio-606413c90c9bd1d0ff7ae54d227c6dca52338820a779128fdc853ee2d2cb9818 WatchSource:0}: Error finding container 606413c90c9bd1d0ff7ae54d227c6dca52338820a779128fdc853ee2d2cb9818: Status 404 returned error can't find the container with id 606413c90c9bd1d0ff7ae54d227c6dca52338820a779128fdc853ee2d2cb9818 Dec 10 14:38:12 crc kubenswrapper[4727]: I1210 14:38:12.235462 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" event={"ID":"0a69dd5a-bff9-4f60-b67d-482d8fbfdd87","Type":"ContainerStarted","Data":"6c41d67c4923bc18ab7051d7de06fa60c6fd84f0ced9612bd4d598a416d090be"} Dec 10 14:38:12 crc kubenswrapper[4727]: I1210 14:38:12.236890 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:12 crc kubenswrapper[4727]: I1210 14:38:12.236951 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" event={"ID":"0a69dd5a-bff9-4f60-b67d-482d8fbfdd87","Type":"ContainerStarted","Data":"606413c90c9bd1d0ff7ae54d227c6dca52338820a779128fdc853ee2d2cb9818"} Dec 10 14:38:12 crc kubenswrapper[4727]: I1210 14:38:12.241981 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" Dec 10 14:38:12 crc kubenswrapper[4727]: I1210 14:38:12.257167 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-665dbdc779-bhs5f" podStartSLOduration=3.2571461250000002 podStartE2EDuration="3.257146125s" podCreationTimestamp="2025-12-10 14:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:38:12.253570846 +0000 UTC m=+396.448345388" watchObservedRunningTime="2025-12-10 14:38:12.257146125 +0000 UTC m=+396.451920667" Dec 10 14:38:26 crc kubenswrapper[4727]: I1210 14:38:26.979790 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xfkcf"] Dec 10 14:38:26 crc kubenswrapper[4727]: I1210 14:38:26.980603 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xfkcf" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" containerName="registry-server" containerID="cri-o://707c387928ee10f70d92f246adb5c0768b549e63c9690ac5e278a2d3b1d2c5e2" gracePeriod=2 Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.179638 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s2hk4"] Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.179960 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s2hk4" 
podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" containerName="registry-server" containerID="cri-o://cf77e0941c682bdc324f57927bd9ccf89d7e29866d3483de41a842257c13fc7e" gracePeriod=2 Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.319356 4727 generic.go:334] "Generic (PLEG): container finished" podID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" containerID="707c387928ee10f70d92f246adb5c0768b549e63c9690ac5e278a2d3b1d2c5e2" exitCode=0 Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.319415 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfkcf" event={"ID":"98bd9482-a59d-4e44-ba30-6a0277bcb2ae","Type":"ContainerDied","Data":"707c387928ee10f70d92f246adb5c0768b549e63c9690ac5e278a2d3b1d2c5e2"} Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.323630 4727 generic.go:334] "Generic (PLEG): container finished" podID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" containerID="cf77e0941c682bdc324f57927bd9ccf89d7e29866d3483de41a842257c13fc7e" exitCode=0 Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.323668 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2hk4" event={"ID":"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96","Type":"ContainerDied","Data":"cf77e0941c682bdc324f57927bd9ccf89d7e29866d3483de41a842257c13fc7e"} Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.699600 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s2hk4" Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.831741 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-utilities\") pod \"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96\" (UID: \"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96\") " Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.831880 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-catalog-content\") pod \"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96\" (UID: \"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96\") " Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.831947 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57x4b\" (UniqueName: \"kubernetes.io/projected/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-kube-api-access-57x4b\") pod \"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96\" (UID: \"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96\") " Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.832956 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-utilities" (OuterVolumeSpecName: "utilities") pod "44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" (UID: "44e5c54b-8d4a-4435-bc9e-a93dc0b37e96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.847114 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-kube-api-access-57x4b" (OuterVolumeSpecName: "kube-api-access-57x4b") pod "44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" (UID: "44e5c54b-8d4a-4435-bc9e-a93dc0b37e96"). InnerVolumeSpecName "kube-api-access-57x4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.882755 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" (UID: "44e5c54b-8d4a-4435-bc9e-a93dc0b37e96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.920279 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xfkcf" Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.933163 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.933201 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:27 crc kubenswrapper[4727]: I1210 14:38:27.933240 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57x4b\" (UniqueName: \"kubernetes.io/projected/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96-kube-api-access-57x4b\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.034360 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-utilities\") pod \"98bd9482-a59d-4e44-ba30-6a0277bcb2ae\" (UID: \"98bd9482-a59d-4e44-ba30-6a0277bcb2ae\") " Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.034542 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-catalog-content\") pod \"98bd9482-a59d-4e44-ba30-6a0277bcb2ae\" (UID: \"98bd9482-a59d-4e44-ba30-6a0277bcb2ae\") " Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.034576 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tggrq\" (UniqueName: \"kubernetes.io/projected/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-kube-api-access-tggrq\") pod \"98bd9482-a59d-4e44-ba30-6a0277bcb2ae\" (UID: \"98bd9482-a59d-4e44-ba30-6a0277bcb2ae\") " Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.035700 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-utilities" (OuterVolumeSpecName: "utilities") pod "98bd9482-a59d-4e44-ba30-6a0277bcb2ae" (UID: "98bd9482-a59d-4e44-ba30-6a0277bcb2ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.039308 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-kube-api-access-tggrq" (OuterVolumeSpecName: "kube-api-access-tggrq") pod "98bd9482-a59d-4e44-ba30-6a0277bcb2ae" (UID: "98bd9482-a59d-4e44-ba30-6a0277bcb2ae"). InnerVolumeSpecName "kube-api-access-tggrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.083690 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98bd9482-a59d-4e44-ba30-6a0277bcb2ae" (UID: "98bd9482-a59d-4e44-ba30-6a0277bcb2ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.135885 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tggrq\" (UniqueName: \"kubernetes.io/projected/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-kube-api-access-tggrq\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.135933 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.135948 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98bd9482-a59d-4e44-ba30-6a0277bcb2ae-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.351670 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfkcf" event={"ID":"98bd9482-a59d-4e44-ba30-6a0277bcb2ae","Type":"ContainerDied","Data":"0042ebdcc288a9e1e32de1f86655fa3676ed693ad643bf1c98015f20593d01d6"} Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.352083 4727 scope.go:117] "RemoveContainer" containerID="707c387928ee10f70d92f246adb5c0768b549e63c9690ac5e278a2d3b1d2c5e2" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.352353 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xfkcf" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.361934 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2hk4" event={"ID":"44e5c54b-8d4a-4435-bc9e-a93dc0b37e96","Type":"ContainerDied","Data":"94e5e8f5f38763b965382517dfce42503f1a8efe07a97b6e1cd30824a43787b1"} Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.362065 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s2hk4" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.382361 4727 scope.go:117] "RemoveContainer" containerID="fd515ccf4f46612e25578ee57df0a511a672c25c95f264db50f31962b5b29a75" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.388730 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xfkcf"] Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.394899 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xfkcf"] Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.404233 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s2hk4"] Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.413501 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s2hk4"] Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.416962 4727 scope.go:117] "RemoveContainer" containerID="382c359e531d100c90d1519f7d51016503308a8283ce725ea287a7bde57ae21c" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.431457 4727 scope.go:117] "RemoveContainer" containerID="cf77e0941c682bdc324f57927bd9ccf89d7e29866d3483de41a842257c13fc7e" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.451502 4727 scope.go:117] "RemoveContainer" containerID="0fdddf070bd12fd4f4aa164653ea641b998391ca75754b86b5bdb93cbe68dfc6" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.469761 4727 scope.go:117] "RemoveContainer" containerID="74f546d67665f1ed34428d9c57ee8f2f99b69e32bba2d8c08d9618aaa57c5b71" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.572217 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" path="/var/lib/kubelet/pods/44e5c54b-8d4a-4435-bc9e-a93dc0b37e96/volumes" Dec 10 14:38:28 crc kubenswrapper[4727]: I1210 14:38:28.573232 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" path="/var/lib/kubelet/pods/98bd9482-a59d-4e44-ba30-6a0277bcb2ae/volumes" Dec 10 14:38:29 crc kubenswrapper[4727]: I1210 14:38:29.383974 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ns2g4"] Dec 10 14:38:29 crc kubenswrapper[4727]: I1210 14:38:29.385105 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ns2g4" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" containerName="registry-server" containerID="cri-o://0baa338d94fc2e5499cdee6f61f4836340c16dcf1efb1cd7cdbffe14ce03ed47" gracePeriod=2 Dec 10 14:38:29 crc kubenswrapper[4727]: I1210 14:38:29.582023 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4sqds"] Dec 10 14:38:29 crc kubenswrapper[4727]: I1210 14:38:29.582621 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4sqds" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" containerName="registry-server" containerID="cri-o://c4a706a084c074700617fa3cbf3733f58fc461c759567ed1169ba3e0dc6902b9" gracePeriod=2 Dec 10 14:38:29 crc kubenswrapper[4727]: I1210 14:38:29.907411 4727 util.go:48] "No ready sandbox for pod can be found. 
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.007499 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4sqds"
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.066049 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-utilities\") pod \"4da708e0-26ae-4bf4-ab5c-ca793fc6e207\" (UID: \"4da708e0-26ae-4bf4-ab5c-ca793fc6e207\") "
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.066371 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c98zl\" (UniqueName: \"kubernetes.io/projected/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-kube-api-access-c98zl\") pod \"4da708e0-26ae-4bf4-ab5c-ca793fc6e207\" (UID: \"4da708e0-26ae-4bf4-ab5c-ca793fc6e207\") "
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.066526 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-catalog-content\") pod \"4da708e0-26ae-4bf4-ab5c-ca793fc6e207\" (UID: \"4da708e0-26ae-4bf4-ab5c-ca793fc6e207\") "
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.066940 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-utilities" (OuterVolumeSpecName: "utilities") pod "4da708e0-26ae-4bf4-ab5c-ca793fc6e207" (UID: "4da708e0-26ae-4bf4-ab5c-ca793fc6e207"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.070920 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-kube-api-access-c98zl" (OuterVolumeSpecName: "kube-api-access-c98zl") pod "4da708e0-26ae-4bf4-ab5c-ca793fc6e207" (UID: "4da708e0-26ae-4bf4-ab5c-ca793fc6e207"). InnerVolumeSpecName "kube-api-access-c98zl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.086126 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4da708e0-26ae-4bf4-ab5c-ca793fc6e207" (UID: "4da708e0-26ae-4bf4-ab5c-ca793fc6e207"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.167635 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzhbg\" (UniqueName: \"kubernetes.io/projected/dea7b38c-5f36-498f-93c4-23e849473cb4-kube-api-access-nzhbg\") pod \"dea7b38c-5f36-498f-93c4-23e849473cb4\" (UID: \"dea7b38c-5f36-498f-93c4-23e849473cb4\") "
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.167818 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea7b38c-5f36-498f-93c4-23e849473cb4-catalog-content\") pod \"dea7b38c-5f36-498f-93c4-23e849473cb4\" (UID: \"dea7b38c-5f36-498f-93c4-23e849473cb4\") "
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.167866 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea7b38c-5f36-498f-93c4-23e849473cb4-utilities\") pod \"dea7b38c-5f36-498f-93c4-23e849473cb4\" (UID: \"dea7b38c-5f36-498f-93c4-23e849473cb4\") "
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.168139 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.168157 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c98zl\" (UniqueName: \"kubernetes.io/projected/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-kube-api-access-c98zl\") on node \"crc\" DevicePath \"\""
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.168170 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da708e0-26ae-4bf4-ab5c-ca793fc6e207-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.168928 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dea7b38c-5f36-498f-93c4-23e849473cb4-utilities" (OuterVolumeSpecName: "utilities") pod "dea7b38c-5f36-498f-93c4-23e849473cb4" (UID: "dea7b38c-5f36-498f-93c4-23e849473cb4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.171138 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea7b38c-5f36-498f-93c4-23e849473cb4-kube-api-access-nzhbg" (OuterVolumeSpecName: "kube-api-access-nzhbg") pod "dea7b38c-5f36-498f-93c4-23e849473cb4" (UID: "dea7b38c-5f36-498f-93c4-23e849473cb4"). InnerVolumeSpecName "kube-api-access-nzhbg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.269642 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea7b38c-5f36-498f-93c4-23e849473cb4-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.269715 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzhbg\" (UniqueName: \"kubernetes.io/projected/dea7b38c-5f36-498f-93c4-23e849473cb4-kube-api-access-nzhbg\") on node \"crc\" DevicePath \"\""
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.270077 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dea7b38c-5f36-498f-93c4-23e849473cb4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dea7b38c-5f36-498f-93c4-23e849473cb4" (UID: "dea7b38c-5f36-498f-93c4-23e849473cb4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.371230 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea7b38c-5f36-498f-93c4-23e849473cb4-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.387989 4727 generic.go:334] "Generic (PLEG): container finished" podID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" containerID="0baa338d94fc2e5499cdee6f61f4836340c16dcf1efb1cd7cdbffe14ce03ed47" exitCode=0
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.388084 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ns2g4"
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.388144 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns2g4" event={"ID":"4da708e0-26ae-4bf4-ab5c-ca793fc6e207","Type":"ContainerDied","Data":"0baa338d94fc2e5499cdee6f61f4836340c16dcf1efb1cd7cdbffe14ce03ed47"}
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.388190 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns2g4" event={"ID":"4da708e0-26ae-4bf4-ab5c-ca793fc6e207","Type":"ContainerDied","Data":"ff508e9bba23a442780d3eee6e1299fb898916fd776216b02fe009896b8e7e03"}
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.388222 4727 scope.go:117] "RemoveContainer" containerID="0baa338d94fc2e5499cdee6f61f4836340c16dcf1efb1cd7cdbffe14ce03ed47"
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.391732 4727 generic.go:334] "Generic (PLEG): container finished" podID="dea7b38c-5f36-498f-93c4-23e849473cb4" containerID="c4a706a084c074700617fa3cbf3733f58fc461c759567ed1169ba3e0dc6902b9" exitCode=0
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.391795 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4sqds" event={"ID":"dea7b38c-5f36-498f-93c4-23e849473cb4","Type":"ContainerDied","Data":"c4a706a084c074700617fa3cbf3733f58fc461c759567ed1169ba3e0dc6902b9"}
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.391830 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4sqds" event={"ID":"dea7b38c-5f36-498f-93c4-23e849473cb4","Type":"ContainerDied","Data":"b22f00ea38501b9ae1a14b5a74195a10dfb9e291ba587c71cb9ac1ae8228e8e5"}
Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.391936 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4sqds"
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4sqds" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.407310 4727 scope.go:117] "RemoveContainer" containerID="819960a4b10519e264cbe88aa183c2bbab7bb01b1cab20e945a23137409d4378" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.421718 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ns2g4"] Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.425638 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ns2g4"] Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.438104 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4sqds"] Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.442305 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4sqds"] Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.448358 4727 scope.go:117] "RemoveContainer" containerID="ea284af1f5eefc5417d9b04f7c49d339aa5d0f7a3e385e82ce35340bd9f18e33" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.465693 4727 scope.go:117] "RemoveContainer" containerID="0baa338d94fc2e5499cdee6f61f4836340c16dcf1efb1cd7cdbffe14ce03ed47" Dec 10 14:38:30 crc kubenswrapper[4727]: E1210 14:38:30.466284 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0baa338d94fc2e5499cdee6f61f4836340c16dcf1efb1cd7cdbffe14ce03ed47\": container with ID starting with 0baa338d94fc2e5499cdee6f61f4836340c16dcf1efb1cd7cdbffe14ce03ed47 not found: ID does not exist" containerID="0baa338d94fc2e5499cdee6f61f4836340c16dcf1efb1cd7cdbffe14ce03ed47" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.466399 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0baa338d94fc2e5499cdee6f61f4836340c16dcf1efb1cd7cdbffe14ce03ed47"} err="failed to get container status \"0baa338d94fc2e5499cdee6f61f4836340c16dcf1efb1cd7cdbffe14ce03ed47\": rpc error: code = NotFound desc = could not find container \"0baa338d94fc2e5499cdee6f61f4836340c16dcf1efb1cd7cdbffe14ce03ed47\": container with ID starting with 0baa338d94fc2e5499cdee6f61f4836340c16dcf1efb1cd7cdbffe14ce03ed47 not found: ID does not exist" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.466449 4727 scope.go:117] "RemoveContainer" containerID="819960a4b10519e264cbe88aa183c2bbab7bb01b1cab20e945a23137409d4378" Dec 10 14:38:30 crc kubenswrapper[4727]: E1210 14:38:30.466830 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"819960a4b10519e264cbe88aa183c2bbab7bb01b1cab20e945a23137409d4378\": container with ID starting with 819960a4b10519e264cbe88aa183c2bbab7bb01b1cab20e945a23137409d4378 not found: ID does not exist" containerID="819960a4b10519e264cbe88aa183c2bbab7bb01b1cab20e945a23137409d4378" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.466870 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"819960a4b10519e264cbe88aa183c2bbab7bb01b1cab20e945a23137409d4378"} err="failed to get container status \"819960a4b10519e264cbe88aa183c2bbab7bb01b1cab20e945a23137409d4378\": rpc error: code = NotFound desc = could not find container \"819960a4b10519e264cbe88aa183c2bbab7bb01b1cab20e945a23137409d4378\": container with ID starting with 
819960a4b10519e264cbe88aa183c2bbab7bb01b1cab20e945a23137409d4378 not found: ID does not exist" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.466938 4727 scope.go:117] "RemoveContainer" containerID="ea284af1f5eefc5417d9b04f7c49d339aa5d0f7a3e385e82ce35340bd9f18e33" Dec 10 14:38:30 crc kubenswrapper[4727]: E1210 14:38:30.467526 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea284af1f5eefc5417d9b04f7c49d339aa5d0f7a3e385e82ce35340bd9f18e33\": container with ID starting with ea284af1f5eefc5417d9b04f7c49d339aa5d0f7a3e385e82ce35340bd9f18e33 not found: ID does not exist" containerID="ea284af1f5eefc5417d9b04f7c49d339aa5d0f7a3e385e82ce35340bd9f18e33" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.467549 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea284af1f5eefc5417d9b04f7c49d339aa5d0f7a3e385e82ce35340bd9f18e33"} err="failed to get container status \"ea284af1f5eefc5417d9b04f7c49d339aa5d0f7a3e385e82ce35340bd9f18e33\": rpc error: code = NotFound desc = could not find container \"ea284af1f5eefc5417d9b04f7c49d339aa5d0f7a3e385e82ce35340bd9f18e33\": container with ID starting with ea284af1f5eefc5417d9b04f7c49d339aa5d0f7a3e385e82ce35340bd9f18e33 not found: ID does not exist" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.467562 4727 scope.go:117] "RemoveContainer" containerID="c4a706a084c074700617fa3cbf3733f58fc461c759567ed1169ba3e0dc6902b9" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.485020 4727 scope.go:117] "RemoveContainer" containerID="8ce0583ef3d4e343fe14b5f91442690a11557554538715163e4aca09800abf01" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.507485 4727 scope.go:117] "RemoveContainer" containerID="a9bbe55ee0df656f51413b43f71e7bab66683dcca84d2d7a3841515adbccee0d" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.555964 4727 scope.go:117] "RemoveContainer" containerID="c4a706a084c074700617fa3cbf3733f58fc461c759567ed1169ba3e0dc6902b9" Dec 10 14:38:30 crc kubenswrapper[4727]: E1210 14:38:30.556478 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4a706a084c074700617fa3cbf3733f58fc461c759567ed1169ba3e0dc6902b9\": container with ID starting with c4a706a084c074700617fa3cbf3733f58fc461c759567ed1169ba3e0dc6902b9 not found: ID does not exist" containerID="c4a706a084c074700617fa3cbf3733f58fc461c759567ed1169ba3e0dc6902b9" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.556767 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4a706a084c074700617fa3cbf3733f58fc461c759567ed1169ba3e0dc6902b9"} err="failed to get container status \"c4a706a084c074700617fa3cbf3733f58fc461c759567ed1169ba3e0dc6902b9\": rpc error: code = NotFound desc = could not find container \"c4a706a084c074700617fa3cbf3733f58fc461c759567ed1169ba3e0dc6902b9\": container with ID starting with c4a706a084c074700617fa3cbf3733f58fc461c759567ed1169ba3e0dc6902b9 not found: ID does not exist" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.556932 4727 scope.go:117] "RemoveContainer" containerID="8ce0583ef3d4e343fe14b5f91442690a11557554538715163e4aca09800abf01" Dec 10 14:38:30 crc kubenswrapper[4727]: E1210 14:38:30.557399 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ce0583ef3d4e343fe14b5f91442690a11557554538715163e4aca09800abf01\": container 
with ID starting with 8ce0583ef3d4e343fe14b5f91442690a11557554538715163e4aca09800abf01 not found: ID does not exist" containerID="8ce0583ef3d4e343fe14b5f91442690a11557554538715163e4aca09800abf01" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.557445 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ce0583ef3d4e343fe14b5f91442690a11557554538715163e4aca09800abf01"} err="failed to get container status \"8ce0583ef3d4e343fe14b5f91442690a11557554538715163e4aca09800abf01\": rpc error: code = NotFound desc = could not find container \"8ce0583ef3d4e343fe14b5f91442690a11557554538715163e4aca09800abf01\": container with ID starting with 8ce0583ef3d4e343fe14b5f91442690a11557554538715163e4aca09800abf01 not found: ID does not exist" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.557481 4727 scope.go:117] "RemoveContainer" containerID="a9bbe55ee0df656f51413b43f71e7bab66683dcca84d2d7a3841515adbccee0d" Dec 10 14:38:30 crc kubenswrapper[4727]: E1210 14:38:30.567420 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9bbe55ee0df656f51413b43f71e7bab66683dcca84d2d7a3841515adbccee0d\": container with ID starting with a9bbe55ee0df656f51413b43f71e7bab66683dcca84d2d7a3841515adbccee0d not found: ID does not exist" containerID="a9bbe55ee0df656f51413b43f71e7bab66683dcca84d2d7a3841515adbccee0d" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.567486 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9bbe55ee0df656f51413b43f71e7bab66683dcca84d2d7a3841515adbccee0d"} err="failed to get container status \"a9bbe55ee0df656f51413b43f71e7bab66683dcca84d2d7a3841515adbccee0d\": rpc error: code = NotFound desc = could not find container \"a9bbe55ee0df656f51413b43f71e7bab66683dcca84d2d7a3841515adbccee0d\": container with ID starting with a9bbe55ee0df656f51413b43f71e7bab66683dcca84d2d7a3841515adbccee0d not found: ID does not exist" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.579055 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" path="/var/lib/kubelet/pods/4da708e0-26ae-4bf4-ab5c-ca793fc6e207/volumes" Dec 10 14:38:30 crc kubenswrapper[4727]: I1210 14:38:30.579960 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" path="/var/lib/kubelet/pods/dea7b38c-5f36-498f-93c4-23e849473cb4/volumes" Dec 10 14:38:37 crc kubenswrapper[4727]: I1210 14:38:37.724589 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:38:37 crc kubenswrapper[4727]: I1210 14:38:37.725172 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.081617 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qhhcq"] Dec 10 14:38:40 crc kubenswrapper[4727]: E1210 14:38:40.082222 4727 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" containerName="extract-content" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082240 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" containerName="extract-content" Dec 10 14:38:40 crc kubenswrapper[4727]: E1210 14:38:40.082253 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" containerName="extract-content" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082259 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" containerName="extract-content" Dec 10 14:38:40 crc kubenswrapper[4727]: E1210 14:38:40.082275 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" containerName="extract-utilities" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082282 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" containerName="extract-utilities" Dec 10 14:38:40 crc kubenswrapper[4727]: E1210 14:38:40.082290 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" containerName="registry-server" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082296 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" containerName="registry-server" Dec 10 14:38:40 crc kubenswrapper[4727]: E1210 14:38:40.082305 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" containerName="extract-utilities" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082310 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" containerName="extract-utilities" Dec 10 14:38:40 crc kubenswrapper[4727]: E1210 14:38:40.082321 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" containerName="extract-content" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082327 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" containerName="extract-content" Dec 10 14:38:40 crc kubenswrapper[4727]: E1210 14:38:40.082337 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" containerName="extract-content" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082343 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" containerName="extract-content" Dec 10 14:38:40 crc kubenswrapper[4727]: E1210 14:38:40.082353 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" containerName="extract-utilities" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082361 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" containerName="extract-utilities" Dec 10 14:38:40 crc kubenswrapper[4727]: E1210 14:38:40.082370 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" containerName="registry-server" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082377 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" containerName="registry-server" Dec 10 14:38:40 crc kubenswrapper[4727]: E1210 
14:38:40.082387 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" containerName="registry-server" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082394 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" containerName="registry-server" Dec 10 14:38:40 crc kubenswrapper[4727]: E1210 14:38:40.082401 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" containerName="registry-server" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082409 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" containerName="registry-server" Dec 10 14:38:40 crc kubenswrapper[4727]: E1210 14:38:40.082421 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" containerName="extract-utilities" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082427 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" containerName="extract-utilities" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082558 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="98bd9482-a59d-4e44-ba30-6a0277bcb2ae" containerName="registry-server" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082575 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da708e0-26ae-4bf4-ab5c-ca793fc6e207" containerName="registry-server" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082587 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea7b38c-5f36-498f-93c4-23e849473cb4" containerName="registry-server" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.082595 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e5c54b-8d4a-4435-bc9e-a93dc0b37e96" containerName="registry-server" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.083175 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.095706 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qhhcq"] Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.190784 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/55177560-3030-4ff1-b6e1-792ba962a089-registry-certificates\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.190863 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcxct\" (UniqueName: \"kubernetes.io/projected/55177560-3030-4ff1-b6e1-792ba962a089-kube-api-access-fcxct\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.190919 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55177560-3030-4ff1-b6e1-792ba962a089-trusted-ca\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.191132 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/55177560-3030-4ff1-b6e1-792ba962a089-registry-tls\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.191186 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/55177560-3030-4ff1-b6e1-792ba962a089-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.191232 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/55177560-3030-4ff1-b6e1-792ba962a089-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.191299 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.191372 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/55177560-3030-4ff1-b6e1-792ba962a089-bound-sa-token\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.217121 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.319801 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/55177560-3030-4ff1-b6e1-792ba962a089-registry-tls\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.319866 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/55177560-3030-4ff1-b6e1-792ba962a089-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.319923 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/55177560-3030-4ff1-b6e1-792ba962a089-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.319963 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55177560-3030-4ff1-b6e1-792ba962a089-bound-sa-token\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.320060 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/55177560-3030-4ff1-b6e1-792ba962a089-registry-certificates\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.320085 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcxct\" (UniqueName: \"kubernetes.io/projected/55177560-3030-4ff1-b6e1-792ba962a089-kube-api-access-fcxct\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.320116 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55177560-3030-4ff1-b6e1-792ba962a089-trusted-ca\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.320898 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/55177560-3030-4ff1-b6e1-792ba962a089-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.321614 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55177560-3030-4ff1-b6e1-792ba962a089-trusted-ca\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.322310 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/55177560-3030-4ff1-b6e1-792ba962a089-registry-certificates\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.327854 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/55177560-3030-4ff1-b6e1-792ba962a089-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.328233 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/55177560-3030-4ff1-b6e1-792ba962a089-registry-tls\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.342422 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55177560-3030-4ff1-b6e1-792ba962a089-bound-sa-token\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.347876 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcxct\" (UniqueName: \"kubernetes.io/projected/55177560-3030-4ff1-b6e1-792ba962a089-kube-api-access-fcxct\") pod \"image-registry-66df7c8f76-qhhcq\" (UID: \"55177560-3030-4ff1-b6e1-792ba962a089\") " pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.403520 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:40 crc kubenswrapper[4727]: I1210 14:38:40.828078 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qhhcq"] Dec 10 14:38:40 crc kubenswrapper[4727]: W1210 14:38:40.840097 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55177560_3030_4ff1_b6e1_792ba962a089.slice/crio-5f263182f0a2776b32b834a885d8f14cfbfe110daef0efcee5c262b8f96bf86b WatchSource:0}: Error finding container 5f263182f0a2776b32b834a885d8f14cfbfe110daef0efcee5c262b8f96bf86b: Status 404 returned error can't find the container with id 5f263182f0a2776b32b834a885d8f14cfbfe110daef0efcee5c262b8f96bf86b Dec 10 14:38:41 crc kubenswrapper[4727]: I1210 14:38:41.458575 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" event={"ID":"55177560-3030-4ff1-b6e1-792ba962a089","Type":"ContainerStarted","Data":"548fe7074b7089119f3dbf2b5e1d279229f7069069853bd17575be18efb888cc"} Dec 10 14:38:41 crc kubenswrapper[4727]: I1210 14:38:41.458644 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" event={"ID":"55177560-3030-4ff1-b6e1-792ba962a089","Type":"ContainerStarted","Data":"5f263182f0a2776b32b834a885d8f14cfbfe110daef0efcee5c262b8f96bf86b"} Dec 10 14:38:41 crc kubenswrapper[4727]: I1210 14:38:41.458842 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:38:41 crc kubenswrapper[4727]: I1210 14:38:41.479737 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" podStartSLOduration=1.479693562 podStartE2EDuration="1.479693562s" podCreationTimestamp="2025-12-10 14:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:38:41.477686706 +0000 UTC m=+425.672461248" watchObservedRunningTime="2025-12-10 14:38:41.479693562 +0000 UTC m=+425.674468134" Dec 10 14:38:49 crc kubenswrapper[4727]: I1210 14:38:49.465373 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5db558bd57-ddvqr"] Dec 10 14:38:49 crc kubenswrapper[4727]: I1210 14:38:49.466100 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" podUID="8121a631-cd75-4f64-ac72-37a77a7450b8" containerName="controller-manager" containerID="cri-o://5df065c5cc0e8c5667c7c5dded1b097a4c664d616e4be11474c6c2fe3ed6fddf" gracePeriod=30 Dec 10 14:38:49 crc kubenswrapper[4727]: I1210 14:38:49.867613 4727 patch_prober.go:28] interesting pod/controller-manager-5db558bd57-ddvqr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Dec 10 14:38:49 crc kubenswrapper[4727]: I1210 14:38:49.867939 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" podUID="8121a631-cd75-4f64-ac72-37a77a7450b8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.373170 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr"
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.505594 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-client-ca\") pod \"8121a631-cd75-4f64-ac72-37a77a7450b8\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") "
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.505692 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxdf4\" (UniqueName: \"kubernetes.io/projected/8121a631-cd75-4f64-ac72-37a77a7450b8-kube-api-access-kxdf4\") pod \"8121a631-cd75-4f64-ac72-37a77a7450b8\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") "
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.505723 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-proxy-ca-bundles\") pod \"8121a631-cd75-4f64-ac72-37a77a7450b8\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") "
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.505752 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8121a631-cd75-4f64-ac72-37a77a7450b8-serving-cert\") pod \"8121a631-cd75-4f64-ac72-37a77a7450b8\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") "
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.505811 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-config\") pod \"8121a631-cd75-4f64-ac72-37a77a7450b8\" (UID: \"8121a631-cd75-4f64-ac72-37a77a7450b8\") "
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.506684 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8121a631-cd75-4f64-ac72-37a77a7450b8" (UID: "8121a631-cd75-4f64-ac72-37a77a7450b8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.506730 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-config" (OuterVolumeSpecName: "config") pod "8121a631-cd75-4f64-ac72-37a77a7450b8" (UID: "8121a631-cd75-4f64-ac72-37a77a7450b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.506699 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-client-ca" (OuterVolumeSpecName: "client-ca") pod "8121a631-cd75-4f64-ac72-37a77a7450b8" (UID: "8121a631-cd75-4f64-ac72-37a77a7450b8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.511624 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8121a631-cd75-4f64-ac72-37a77a7450b8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8121a631-cd75-4f64-ac72-37a77a7450b8" (UID: "8121a631-cd75-4f64-ac72-37a77a7450b8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.512509 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8121a631-cd75-4f64-ac72-37a77a7450b8-kube-api-access-kxdf4" (OuterVolumeSpecName: "kube-api-access-kxdf4") pod "8121a631-cd75-4f64-ac72-37a77a7450b8" (UID: "8121a631-cd75-4f64-ac72-37a77a7450b8"). InnerVolumeSpecName "kube-api-access-kxdf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.582186 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db"]
Dec 10 14:38:50 crc kubenswrapper[4727]: E1210 14:38:50.583001 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8121a631-cd75-4f64-ac72-37a77a7450b8" containerName="controller-manager"
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.583092 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8121a631-cd75-4f64-ac72-37a77a7450b8" containerName="controller-manager"
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.583293 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8121a631-cd75-4f64-ac72-37a77a7450b8" containerName="controller-manager"
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.583920 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db"
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.585063 4727 generic.go:334] "Generic (PLEG): container finished" podID="8121a631-cd75-4f64-ac72-37a77a7450b8" containerID="5df065c5cc0e8c5667c7c5dded1b097a4c664d616e4be11474c6c2fe3ed6fddf" exitCode=0
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.585167 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" event={"ID":"8121a631-cd75-4f64-ac72-37a77a7450b8","Type":"ContainerDied","Data":"5df065c5cc0e8c5667c7c5dded1b097a4c664d616e4be11474c6c2fe3ed6fddf"}
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.585206 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr" event={"ID":"8121a631-cd75-4f64-ac72-37a77a7450b8","Type":"ContainerDied","Data":"416eec9801cb6bc0b1c02cd5d7cddf89145ce1392f01e37778a717b88d0dd893"}
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.585230 4727 scope.go:117] "RemoveContainer" containerID="5df065c5cc0e8c5667c7c5dded1b097a4c664d616e4be11474c6c2fe3ed6fddf"
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.585409 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5db558bd57-ddvqr"
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.600200 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db"]
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.607641 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-client-ca\") on node \"crc\" DevicePath \"\""
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.607677 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxdf4\" (UniqueName: \"kubernetes.io/projected/8121a631-cd75-4f64-ac72-37a77a7450b8-kube-api-access-kxdf4\") on node \"crc\" DevicePath \"\""
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.607689 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.607700 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8121a631-cd75-4f64-ac72-37a77a7450b8-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.608212 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8121a631-cd75-4f64-ac72-37a77a7450b8-config\") on node \"crc\" DevicePath \"\""
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.623506 4727 scope.go:117] "RemoveContainer" containerID="5df065c5cc0e8c5667c7c5dded1b097a4c664d616e4be11474c6c2fe3ed6fddf"
Dec 10 14:38:50 crc kubenswrapper[4727]: E1210 14:38:50.624650 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5df065c5cc0e8c5667c7c5dded1b097a4c664d616e4be11474c6c2fe3ed6fddf\": container with ID starting with 5df065c5cc0e8c5667c7c5dded1b097a4c664d616e4be11474c6c2fe3ed6fddf not found: ID does not exist" containerID="5df065c5cc0e8c5667c7c5dded1b097a4c664d616e4be11474c6c2fe3ed6fddf"
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.624791 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df065c5cc0e8c5667c7c5dded1b097a4c664d616e4be11474c6c2fe3ed6fddf"} err="failed to get container status \"5df065c5cc0e8c5667c7c5dded1b097a4c664d616e4be11474c6c2fe3ed6fddf\": rpc error: code = NotFound desc = could not find container \"5df065c5cc0e8c5667c7c5dded1b097a4c664d616e4be11474c6c2fe3ed6fddf\": container with ID starting with 5df065c5cc0e8c5667c7c5dded1b097a4c664d616e4be11474c6c2fe3ed6fddf not found: ID does not exist"
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.638868 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5db558bd57-ddvqr"]
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.642095 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5db558bd57-ddvqr"]
Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.709825 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8558a2f0-0223-452b-a0cc-45b01ed8449b-config\") pod \"controller-manager-64dbdd9d9d-4q7db\" (UID: \"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db"
\"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.709963 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8558a2f0-0223-452b-a0cc-45b01ed8449b-serving-cert\") pod \"controller-manager-64dbdd9d9d-4q7db\" (UID: \"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.710041 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8558a2f0-0223-452b-a0cc-45b01ed8449b-client-ca\") pod \"controller-manager-64dbdd9d9d-4q7db\" (UID: \"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.710076 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97cvf\" (UniqueName: \"kubernetes.io/projected/8558a2f0-0223-452b-a0cc-45b01ed8449b-kube-api-access-97cvf\") pod \"controller-manager-64dbdd9d9d-4q7db\" (UID: \"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.710157 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8558a2f0-0223-452b-a0cc-45b01ed8449b-proxy-ca-bundles\") pod \"controller-manager-64dbdd9d9d-4q7db\" (UID: \"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.811342 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8558a2f0-0223-452b-a0cc-45b01ed8449b-proxy-ca-bundles\") pod \"controller-manager-64dbdd9d9d-4q7db\" (UID: \"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.811683 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8558a2f0-0223-452b-a0cc-45b01ed8449b-config\") pod \"controller-manager-64dbdd9d9d-4q7db\" (UID: \"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.811823 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8558a2f0-0223-452b-a0cc-45b01ed8449b-serving-cert\") pod \"controller-manager-64dbdd9d9d-4q7db\" (UID: \"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.812021 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8558a2f0-0223-452b-a0cc-45b01ed8449b-client-ca\") pod \"controller-manager-64dbdd9d9d-4q7db\" (UID: \"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:50 
crc kubenswrapper[4727]: I1210 14:38:50.812129 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97cvf\" (UniqueName: \"kubernetes.io/projected/8558a2f0-0223-452b-a0cc-45b01ed8449b-kube-api-access-97cvf\") pod \"controller-manager-64dbdd9d9d-4q7db\" (UID: \"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.812846 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8558a2f0-0223-452b-a0cc-45b01ed8449b-client-ca\") pod \"controller-manager-64dbdd9d9d-4q7db\" (UID: \"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.812980 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8558a2f0-0223-452b-a0cc-45b01ed8449b-config\") pod \"controller-manager-64dbdd9d9d-4q7db\" (UID: \"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.813484 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8558a2f0-0223-452b-a0cc-45b01ed8449b-proxy-ca-bundles\") pod \"controller-manager-64dbdd9d9d-4q7db\" (UID: \"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.817198 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8558a2f0-0223-452b-a0cc-45b01ed8449b-serving-cert\") pod \"controller-manager-64dbdd9d9d-4q7db\" (UID: \"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.846381 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97cvf\" (UniqueName: \"kubernetes.io/projected/8558a2f0-0223-452b-a0cc-45b01ed8449b-kube-api-access-97cvf\") pod \"controller-manager-64dbdd9d9d-4q7db\" (UID: \"8558a2f0-0223-452b-a0cc-45b01ed8449b\") " pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:50 crc kubenswrapper[4727]: I1210 14:38:50.925819 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:51 crc kubenswrapper[4727]: I1210 14:38:51.124865 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db"] Dec 10 14:38:51 crc kubenswrapper[4727]: I1210 14:38:51.595257 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" event={"ID":"8558a2f0-0223-452b-a0cc-45b01ed8449b","Type":"ContainerStarted","Data":"e7e5f6373031b09e9b4482c51d62d93eaf78c2f2b29cd7566718e9e956b5e33b"} Dec 10 14:38:51 crc kubenswrapper[4727]: I1210 14:38:51.595628 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" event={"ID":"8558a2f0-0223-452b-a0cc-45b01ed8449b","Type":"ContainerStarted","Data":"345d9c8e48ca4db7145fde640906bd9fb70e6aa9637f1a19f71c41aa41e4e2e1"} Dec 10 14:38:51 crc kubenswrapper[4727]: I1210 14:38:51.595650 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:51 crc kubenswrapper[4727]: I1210 14:38:51.603781 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" Dec 10 14:38:51 crc kubenswrapper[4727]: I1210 14:38:51.615940 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64dbdd9d9d-4q7db" podStartSLOduration=2.615890582 podStartE2EDuration="2.615890582s" podCreationTimestamp="2025-12-10 14:38:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:38:51.615364518 +0000 UTC m=+435.810139060" watchObservedRunningTime="2025-12-10 14:38:51.615890582 +0000 UTC m=+435.810665124" Dec 10 14:38:52 crc kubenswrapper[4727]: I1210 14:38:52.571994 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8121a631-cd75-4f64-ac72-37a77a7450b8" path="/var/lib/kubelet/pods/8121a631-cd75-4f64-ac72-37a77a7450b8/volumes" Dec 10 14:39:00 crc kubenswrapper[4727]: I1210 14:39:00.410040 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-qhhcq" Dec 10 14:39:00 crc kubenswrapper[4727]: I1210 14:39:00.473166 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n87wt"] Dec 10 14:39:07 crc kubenswrapper[4727]: I1210 14:39:07.724061 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:39:07 crc kubenswrapper[4727]: I1210 14:39:07.724667 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:39:07 crc kubenswrapper[4727]: I1210 14:39:07.724722 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:39:07 crc kubenswrapper[4727]: I1210 14:39:07.741200 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51efd2633926a32a5d87ebb689d788f19863dd8e68c34097ba24f8c6d596b5ea"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 14:39:07 crc kubenswrapper[4727]: I1210 14:39:07.741492 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://51efd2633926a32a5d87ebb689d788f19863dd8e68c34097ba24f8c6d596b5ea" gracePeriod=600 Dec 10 14:39:08 crc kubenswrapper[4727]: I1210 14:39:08.692505 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="51efd2633926a32a5d87ebb689d788f19863dd8e68c34097ba24f8c6d596b5ea" exitCode=0 Dec 10 14:39:08 crc kubenswrapper[4727]: I1210 14:39:08.692621 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"51efd2633926a32a5d87ebb689d788f19863dd8e68c34097ba24f8c6d596b5ea"} Dec 10 14:39:08 crc kubenswrapper[4727]: I1210 14:39:08.692856 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"6962415495a564f00fc2a840efe560b2ce35c3f78b92a0cc2b2c5231d5185282"} Dec 10 14:39:08 crc kubenswrapper[4727]: I1210 14:39:08.692878 4727 scope.go:117] "RemoveContainer" containerID="7d3fb537634ebc363f6146da7c7355ff076b718903b2bd056cb2f8ae7977748f" Dec 10 14:39:10 crc kubenswrapper[4727]: I1210 14:39:10.932848 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g7k77"] Dec 10 14:39:10 crc kubenswrapper[4727]: I1210 14:39:10.933741 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g7k77" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" containerName="registry-server" containerID="cri-o://f41dbd47d6539b9eb3becb8d428d718c1430fa8c7b4672590328bf59d5089c18" gracePeriod=30 Dec 10 14:39:10 crc kubenswrapper[4727]: I1210 14:39:10.946838 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p5v95"] Dec 10 14:39:10 crc kubenswrapper[4727]: I1210 14:39:10.947317 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p5v95" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" containerName="registry-server" containerID="cri-o://3fed7902c2e1129bcc902f5aa5cac74edb31cf0b54952dffbeca558e9dc38656" gracePeriod=30 Dec 10 14:39:10 crc kubenswrapper[4727]: I1210 14:39:10.966135 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6w2jq"] Dec 10 14:39:10 crc kubenswrapper[4727]: I1210 14:39:10.966322 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" podUID="521cc0a4-1afa-4ef6-bdd6-37c60f87273f" 
containerName="marketplace-operator" containerID="cri-o://52e4d345e54a2ebb19127773b70b7376a769e901f187f0514077d94534751e6e" gracePeriod=30 Dec 10 14:39:10 crc kubenswrapper[4727]: I1210 14:39:10.980616 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-thvgk"] Dec 10 14:39:10 crc kubenswrapper[4727]: I1210 14:39:10.981202 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-thvgk" podUID="517d22f8-c007-4428-82f1-1fe55445d509" containerName="registry-server" containerID="cri-o://b5af219de67553d9df393048ab11ead0c71d1c48718358201ea2e50758d11505" gracePeriod=30 Dec 10 14:39:10 crc kubenswrapper[4727]: I1210 14:39:10.986711 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9v4p4"] Dec 10 14:39:10 crc kubenswrapper[4727]: I1210 14:39:10.990789 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" Dec 10 14:39:10 crc kubenswrapper[4727]: I1210 14:39:10.999515 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qn5hx"] Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.000106 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qn5hx" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" containerName="registry-server" containerID="cri-o://092365009e80e94fe7f922e5b01c4091eecb273e99cce46e57f871abb511385d" gracePeriod=30 Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.008746 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9v4p4"] Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.126695 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0925ef8-5391-40fa-a9a9-898f5dfdec33-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9v4p4\" (UID: \"c0925ef8-5391-40fa-a9a9-898f5dfdec33\") " pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.126822 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw8zr\" (UniqueName: \"kubernetes.io/projected/c0925ef8-5391-40fa-a9a9-898f5dfdec33-kube-api-access-tw8zr\") pod \"marketplace-operator-79b997595-9v4p4\" (UID: \"c0925ef8-5391-40fa-a9a9-898f5dfdec33\") " pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.126852 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0925ef8-5391-40fa-a9a9-898f5dfdec33-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9v4p4\" (UID: \"c0925ef8-5391-40fa-a9a9-898f5dfdec33\") " pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.228509 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw8zr\" (UniqueName: \"kubernetes.io/projected/c0925ef8-5391-40fa-a9a9-898f5dfdec33-kube-api-access-tw8zr\") pod \"marketplace-operator-79b997595-9v4p4\" (UID: \"c0925ef8-5391-40fa-a9a9-898f5dfdec33\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.228557 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0925ef8-5391-40fa-a9a9-898f5dfdec33-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9v4p4\" (UID: \"c0925ef8-5391-40fa-a9a9-898f5dfdec33\") " pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.228591 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0925ef8-5391-40fa-a9a9-898f5dfdec33-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9v4p4\" (UID: \"c0925ef8-5391-40fa-a9a9-898f5dfdec33\") " pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.230126 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0925ef8-5391-40fa-a9a9-898f5dfdec33-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9v4p4\" (UID: \"c0925ef8-5391-40fa-a9a9-898f5dfdec33\") " pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.234223 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0925ef8-5391-40fa-a9a9-898f5dfdec33-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9v4p4\" (UID: \"c0925ef8-5391-40fa-a9a9-898f5dfdec33\") " pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.245689 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw8zr\" (UniqueName: \"kubernetes.io/projected/c0925ef8-5391-40fa-a9a9-898f5dfdec33-kube-api-access-tw8zr\") pod \"marketplace-operator-79b997595-9v4p4\" (UID: \"c0925ef8-5391-40fa-a9a9-898f5dfdec33\") " pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.310188 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.727610 4727 generic.go:334] "Generic (PLEG): container finished" podID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" containerID="f41dbd47d6539b9eb3becb8d428d718c1430fa8c7b4672590328bf59d5089c18" exitCode=0 Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.727713 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7k77" event={"ID":"d3b0146e-cc5a-48ef-904a-b2d28a6720f3","Type":"ContainerDied","Data":"f41dbd47d6539b9eb3becb8d428d718c1430fa8c7b4672590328bf59d5089c18"} Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.730170 4727 generic.go:334] "Generic (PLEG): container finished" podID="517d22f8-c007-4428-82f1-1fe55445d509" containerID="b5af219de67553d9df393048ab11ead0c71d1c48718358201ea2e50758d11505" exitCode=0 Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.730226 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thvgk" event={"ID":"517d22f8-c007-4428-82f1-1fe55445d509","Type":"ContainerDied","Data":"b5af219de67553d9df393048ab11ead0c71d1c48718358201ea2e50758d11505"} Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.731115 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9v4p4"] Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.732324 4727 generic.go:334] "Generic (PLEG): container finished" podID="521cc0a4-1afa-4ef6-bdd6-37c60f87273f" containerID="52e4d345e54a2ebb19127773b70b7376a769e901f187f0514077d94534751e6e" exitCode=0 Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.732373 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" event={"ID":"521cc0a4-1afa-4ef6-bdd6-37c60f87273f","Type":"ContainerDied","Data":"52e4d345e54a2ebb19127773b70b7376a769e901f187f0514077d94534751e6e"} Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.732399 4727 scope.go:117] "RemoveContainer" containerID="271c82a18c9ea81712e271d706eaac8c5afdc8a0019476be6ae6c9b343efe524" Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.734538 4727 generic.go:334] "Generic (PLEG): container finished" podID="2484515c-1846-4e63-9747-bc6dc81a574c" containerID="3fed7902c2e1129bcc902f5aa5cac74edb31cf0b54952dffbeca558e9dc38656" exitCode=0 Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.734593 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5v95" event={"ID":"2484515c-1846-4e63-9747-bc6dc81a574c","Type":"ContainerDied","Data":"3fed7902c2e1129bcc902f5aa5cac74edb31cf0b54952dffbeca558e9dc38656"} Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.736573 4727 generic.go:334] "Generic (PLEG): container finished" podID="7f245f78-d777-49e5-8bf1-69a6bb04943b" containerID="092365009e80e94fe7f922e5b01c4091eecb273e99cce46e57f871abb511385d" exitCode=0 Dec 10 14:39:11 crc kubenswrapper[4727]: I1210 14:39:11.736596 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qn5hx" event={"ID":"7f245f78-d777-49e5-8bf1-69a6bb04943b","Type":"ContainerDied","Data":"092365009e80e94fe7f922e5b01c4091eecb273e99cce46e57f871abb511385d"} Dec 10 14:39:11 crc kubenswrapper[4727]: W1210 14:39:11.760096 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0925ef8_5391_40fa_a9a9_898f5dfdec33.slice/crio-638db55cce196a7de55a033b4831a8a267e1a2a8531fb8ea29122b0cd63f8b62 WatchSource:0}: Error finding container 638db55cce196a7de55a033b4831a8a267e1a2a8531fb8ea29122b0cd63f8b62: Status 404 returned error can't find the container with id 638db55cce196a7de55a033b4831a8a267e1a2a8531fb8ea29122b0cd63f8b62 Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.016288 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.141662 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-marketplace-operator-metrics\") pod \"521cc0a4-1afa-4ef6-bdd6-37c60f87273f\" (UID: \"521cc0a4-1afa-4ef6-bdd6-37c60f87273f\") " Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.141720 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69hl4\" (UniqueName: \"kubernetes.io/projected/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-kube-api-access-69hl4\") pod \"521cc0a4-1afa-4ef6-bdd6-37c60f87273f\" (UID: \"521cc0a4-1afa-4ef6-bdd6-37c60f87273f\") " Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.141797 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-marketplace-trusted-ca\") pod \"521cc0a4-1afa-4ef6-bdd6-37c60f87273f\" (UID: \"521cc0a4-1afa-4ef6-bdd6-37c60f87273f\") " Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.142672 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "521cc0a4-1afa-4ef6-bdd6-37c60f87273f" (UID: "521cc0a4-1afa-4ef6-bdd6-37c60f87273f"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.150649 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "521cc0a4-1afa-4ef6-bdd6-37c60f87273f" (UID: "521cc0a4-1afa-4ef6-bdd6-37c60f87273f"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.156928 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-kube-api-access-69hl4" (OuterVolumeSpecName: "kube-api-access-69hl4") pod "521cc0a4-1afa-4ef6-bdd6-37c60f87273f" (UID: "521cc0a4-1afa-4ef6-bdd6-37c60f87273f"). InnerVolumeSpecName "kube-api-access-69hl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.244655 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.244696 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.244706 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69hl4\" (UniqueName: \"kubernetes.io/projected/521cc0a4-1afa-4ef6-bdd6-37c60f87273f-kube-api-access-69hl4\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.345747 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.356446 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.363723 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g7k77" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.393532 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5v95" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.447336 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517d22f8-c007-4428-82f1-1fe55445d509-utilities\") pod \"517d22f8-c007-4428-82f1-1fe55445d509\" (UID: \"517d22f8-c007-4428-82f1-1fe55445d509\") " Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.447422 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmqgx\" (UniqueName: \"kubernetes.io/projected/7f245f78-d777-49e5-8bf1-69a6bb04943b-kube-api-access-pmqgx\") pod \"7f245f78-d777-49e5-8bf1-69a6bb04943b\" (UID: \"7f245f78-d777-49e5-8bf1-69a6bb04943b\") " Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.447467 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjdfb\" (UniqueName: \"kubernetes.io/projected/517d22f8-c007-4428-82f1-1fe55445d509-kube-api-access-fjdfb\") pod \"517d22f8-c007-4428-82f1-1fe55445d509\" (UID: \"517d22f8-c007-4428-82f1-1fe55445d509\") " Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.447521 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f245f78-d777-49e5-8bf1-69a6bb04943b-catalog-content\") pod \"7f245f78-d777-49e5-8bf1-69a6bb04943b\" (UID: \"7f245f78-d777-49e5-8bf1-69a6bb04943b\") " Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.447551 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517d22f8-c007-4428-82f1-1fe55445d509-catalog-content\") pod \"517d22f8-c007-4428-82f1-1fe55445d509\" (UID: \"517d22f8-c007-4428-82f1-1fe55445d509\") " Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 
14:39:12.447606 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f245f78-d777-49e5-8bf1-69a6bb04943b-utilities\") pod \"7f245f78-d777-49e5-8bf1-69a6bb04943b\" (UID: \"7f245f78-d777-49e5-8bf1-69a6bb04943b\") " Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.448204 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/517d22f8-c007-4428-82f1-1fe55445d509-utilities" (OuterVolumeSpecName: "utilities") pod "517d22f8-c007-4428-82f1-1fe55445d509" (UID: "517d22f8-c007-4428-82f1-1fe55445d509"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.448708 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f245f78-d777-49e5-8bf1-69a6bb04943b-utilities" (OuterVolumeSpecName: "utilities") pod "7f245f78-d777-49e5-8bf1-69a6bb04943b" (UID: "7f245f78-d777-49e5-8bf1-69a6bb04943b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.453268 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f245f78-d777-49e5-8bf1-69a6bb04943b-kube-api-access-pmqgx" (OuterVolumeSpecName: "kube-api-access-pmqgx") pod "7f245f78-d777-49e5-8bf1-69a6bb04943b" (UID: "7f245f78-d777-49e5-8bf1-69a6bb04943b"). InnerVolumeSpecName "kube-api-access-pmqgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.458520 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/517d22f8-c007-4428-82f1-1fe55445d509-kube-api-access-fjdfb" (OuterVolumeSpecName: "kube-api-access-fjdfb") pod "517d22f8-c007-4428-82f1-1fe55445d509" (UID: "517d22f8-c007-4428-82f1-1fe55445d509"). InnerVolumeSpecName "kube-api-access-fjdfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.470752 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/517d22f8-c007-4428-82f1-1fe55445d509-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "517d22f8-c007-4428-82f1-1fe55445d509" (UID: "517d22f8-c007-4428-82f1-1fe55445d509"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.548565 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2484515c-1846-4e63-9747-bc6dc81a574c-catalog-content\") pod \"2484515c-1846-4e63-9747-bc6dc81a574c\" (UID: \"2484515c-1846-4e63-9747-bc6dc81a574c\") " Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.548642 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4vqw\" (UniqueName: \"kubernetes.io/projected/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-kube-api-access-n4vqw\") pod \"d3b0146e-cc5a-48ef-904a-b2d28a6720f3\" (UID: \"d3b0146e-cc5a-48ef-904a-b2d28a6720f3\") " Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.548671 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-catalog-content\") pod \"d3b0146e-cc5a-48ef-904a-b2d28a6720f3\" (UID: \"d3b0146e-cc5a-48ef-904a-b2d28a6720f3\") " Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.548700 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2484515c-1846-4e63-9747-bc6dc81a574c-utilities\") pod \"2484515c-1846-4e63-9747-bc6dc81a574c\" (UID: \"2484515c-1846-4e63-9747-bc6dc81a574c\") " Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.548809 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc5m8\" (UniqueName: \"kubernetes.io/projected/2484515c-1846-4e63-9747-bc6dc81a574c-kube-api-access-nc5m8\") pod \"2484515c-1846-4e63-9747-bc6dc81a574c\" (UID: \"2484515c-1846-4e63-9747-bc6dc81a574c\") " Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.548852 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-utilities\") pod \"d3b0146e-cc5a-48ef-904a-b2d28a6720f3\" (UID: \"d3b0146e-cc5a-48ef-904a-b2d28a6720f3\") " Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.549148 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f245f78-d777-49e5-8bf1-69a6bb04943b-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.549171 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517d22f8-c007-4428-82f1-1fe55445d509-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.549182 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmqgx\" (UniqueName: \"kubernetes.io/projected/7f245f78-d777-49e5-8bf1-69a6bb04943b-kube-api-access-pmqgx\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.549197 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjdfb\" (UniqueName: \"kubernetes.io/projected/517d22f8-c007-4428-82f1-1fe55445d509-kube-api-access-fjdfb\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.549208 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517d22f8-c007-4428-82f1-1fe55445d509-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 
14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.549600 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2484515c-1846-4e63-9747-bc6dc81a574c-utilities" (OuterVolumeSpecName: "utilities") pod "2484515c-1846-4e63-9747-bc6dc81a574c" (UID: "2484515c-1846-4e63-9747-bc6dc81a574c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.549715 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-utilities" (OuterVolumeSpecName: "utilities") pod "d3b0146e-cc5a-48ef-904a-b2d28a6720f3" (UID: "d3b0146e-cc5a-48ef-904a-b2d28a6720f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.555584 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2484515c-1846-4e63-9747-bc6dc81a574c-kube-api-access-nc5m8" (OuterVolumeSpecName: "kube-api-access-nc5m8") pod "2484515c-1846-4e63-9747-bc6dc81a574c" (UID: "2484515c-1846-4e63-9747-bc6dc81a574c"). InnerVolumeSpecName "kube-api-access-nc5m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.555636 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-kube-api-access-n4vqw" (OuterVolumeSpecName: "kube-api-access-n4vqw") pod "d3b0146e-cc5a-48ef-904a-b2d28a6720f3" (UID: "d3b0146e-cc5a-48ef-904a-b2d28a6720f3"). InnerVolumeSpecName "kube-api-access-n4vqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.574649 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f245f78-d777-49e5-8bf1-69a6bb04943b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f245f78-d777-49e5-8bf1-69a6bb04943b" (UID: "7f245f78-d777-49e5-8bf1-69a6bb04943b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.608226 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3b0146e-cc5a-48ef-904a-b2d28a6720f3" (UID: "d3b0146e-cc5a-48ef-904a-b2d28a6720f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.612426 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2484515c-1846-4e63-9747-bc6dc81a574c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2484515c-1846-4e63-9747-bc6dc81a574c" (UID: "2484515c-1846-4e63-9747-bc6dc81a574c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.650402 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2484515c-1846-4e63-9747-bc6dc81a574c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.650446 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4vqw\" (UniqueName: \"kubernetes.io/projected/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-kube-api-access-n4vqw\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.650461 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2484515c-1846-4e63-9747-bc6dc81a574c-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.650469 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.650478 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f245f78-d777-49e5-8bf1-69a6bb04943b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.650486 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc5m8\" (UniqueName: \"kubernetes.io/projected/2484515c-1846-4e63-9747-bc6dc81a574c-kube-api-access-nc5m8\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.650494 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3b0146e-cc5a-48ef-904a-b2d28a6720f3-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.745340 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thvgk" event={"ID":"517d22f8-c007-4428-82f1-1fe55445d509","Type":"ContainerDied","Data":"b68889f4ad8cea047ad882f1fc4477fb9e6ad8e3a7b973f1c9133253ba6e8da4"} Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.745398 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thvgk" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.745400 4727 scope.go:117] "RemoveContainer" containerID="b5af219de67553d9df393048ab11ead0c71d1c48718358201ea2e50758d11505" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.755161 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.755170 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6w2jq" event={"ID":"521cc0a4-1afa-4ef6-bdd6-37c60f87273f","Type":"ContainerDied","Data":"966cd666910bd7b6520c8e2ff66d3a121971d9a91099799f0ec8a266cfe16b80"} Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.759713 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5v95" event={"ID":"2484515c-1846-4e63-9747-bc6dc81a574c","Type":"ContainerDied","Data":"9c5622da37dcf0695e48e47a1f2958b55f5b1b63aff2ae011a4cb46074050878"} Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.759748 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5v95" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.767590 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" event={"ID":"c0925ef8-5391-40fa-a9a9-898f5dfdec33","Type":"ContainerStarted","Data":"840ed6a786299bc7989cfe4c8792977495c7f7589c9475b407f053a8db6382c1"} Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.767678 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.767696 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" event={"ID":"c0925ef8-5391-40fa-a9a9-898f5dfdec33","Type":"ContainerStarted","Data":"638db55cce196a7de55a033b4831a8a267e1a2a8531fb8ea29122b0cd63f8b62"} Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.771454 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-thvgk"] Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.771559 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.775235 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qn5hx" event={"ID":"7f245f78-d777-49e5-8bf1-69a6bb04943b","Type":"ContainerDied","Data":"1ba0fc1a0906cc318120975e44091d59168007c5b896f0cf99ac693cafa20d15"} Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.775339 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qn5hx" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.776171 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-thvgk"] Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.777976 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7k77" event={"ID":"d3b0146e-cc5a-48ef-904a-b2d28a6720f3","Type":"ContainerDied","Data":"28a0a8696c8faa89efe7cd83544dfd4d2c9ecf26ae15c363c1268984f19d5e30"} Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.778100 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g7k77" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.782853 4727 scope.go:117] "RemoveContainer" containerID="b7472b253f28593222f69ba6e121df90ced7362caeae70ec916dea0c0dc9aa2a" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.782990 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6w2jq"] Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.789154 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6w2jq"] Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.808275 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9v4p4" podStartSLOduration=2.808251631 podStartE2EDuration="2.808251631s" podCreationTimestamp="2025-12-10 14:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:39:12.806164824 +0000 UTC m=+457.000939366" watchObservedRunningTime="2025-12-10 14:39:12.808251631 +0000 UTC m=+457.003026173" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.812597 4727 scope.go:117] "RemoveContainer" containerID="4da39b19c6fcf3cd388a534956b70c4187f8e3f8bd0f237b77d2b8a791dc123c" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.858495 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qn5hx"] Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.864466 4727 scope.go:117] "RemoveContainer" containerID="52e4d345e54a2ebb19127773b70b7376a769e901f187f0514077d94534751e6e" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.866636 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qn5hx"] Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.871988 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p5v95"] Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.881507 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p5v95"] Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.884981 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g7k77"] Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.887258 4727 scope.go:117] "RemoveContainer" containerID="3fed7902c2e1129bcc902f5aa5cac74edb31cf0b54952dffbeca558e9dc38656" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.887809 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g7k77"] Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.906116 4727 scope.go:117] "RemoveContainer" containerID="3cf74f698f8d36052f73824074cf75e74267f3ae2756db9e8fceb2c78c632135" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.921110 4727 scope.go:117] "RemoveContainer" containerID="4caecf48c0a7a1a8cf7dfeaa150d06e438708679329263aaf510b2ca053a510e" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.935445 4727 scope.go:117] "RemoveContainer" containerID="092365009e80e94fe7f922e5b01c4091eecb273e99cce46e57f871abb511385d" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.947618 4727 scope.go:117] "RemoveContainer" containerID="a3f43c89f04e05e3c4fb1723df7bfac6d2cf2308262d4377cfd218c4b8219df0" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 
14:39:12.961318 4727 scope.go:117] "RemoveContainer" containerID="631422ae118bd8d49ac1116a2d6765d6f2a67870f791ba62045401382771da11" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.974007 4727 scope.go:117] "RemoveContainer" containerID="f41dbd47d6539b9eb3becb8d428d718c1430fa8c7b4672590328bf59d5089c18" Dec 10 14:39:12 crc kubenswrapper[4727]: I1210 14:39:12.990938 4727 scope.go:117] "RemoveContainer" containerID="742fe76861cc16f812ceb6c81196291cce987fcec2ca68f92298517e086c7e67" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.008894 4727 scope.go:117] "RemoveContainer" containerID="4de3f7a881f15b06f689bf45a69d548cfe92ad861f3d7f2e208e923fd8ccc3e1" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363334 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bf29c"] Dec 10 14:39:13 crc kubenswrapper[4727]: E1210 14:39:13.363541 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517d22f8-c007-4428-82f1-1fe55445d509" containerName="extract-utilities" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363554 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="517d22f8-c007-4428-82f1-1fe55445d509" containerName="extract-utilities" Dec 10 14:39:13 crc kubenswrapper[4727]: E1210 14:39:13.363562 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517d22f8-c007-4428-82f1-1fe55445d509" containerName="extract-content" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363568 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="517d22f8-c007-4428-82f1-1fe55445d509" containerName="extract-content" Dec 10 14:39:13 crc kubenswrapper[4727]: E1210 14:39:13.363577 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" containerName="extract-content" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363584 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" containerName="extract-content" Dec 10 14:39:13 crc kubenswrapper[4727]: E1210 14:39:13.363597 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" containerName="registry-server" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363605 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" containerName="registry-server" Dec 10 14:39:13 crc kubenswrapper[4727]: E1210 14:39:13.363613 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" containerName="extract-content" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363619 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" containerName="extract-content" Dec 10 14:39:13 crc kubenswrapper[4727]: E1210 14:39:13.363629 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517d22f8-c007-4428-82f1-1fe55445d509" containerName="registry-server" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363634 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="517d22f8-c007-4428-82f1-1fe55445d509" containerName="registry-server" Dec 10 14:39:13 crc kubenswrapper[4727]: E1210 14:39:13.363642 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" containerName="registry-server" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363647 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2484515c-1846-4e63-9747-bc6dc81a574c" containerName="registry-server" Dec 10 14:39:13 crc kubenswrapper[4727]: E1210 14:39:13.363657 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" containerName="extract-utilities" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363662 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" containerName="extract-utilities" Dec 10 14:39:13 crc kubenswrapper[4727]: E1210 14:39:13.363671 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" containerName="extract-utilities" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363676 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" containerName="extract-utilities" Dec 10 14:39:13 crc kubenswrapper[4727]: E1210 14:39:13.363683 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521cc0a4-1afa-4ef6-bdd6-37c60f87273f" containerName="marketplace-operator" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363688 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="521cc0a4-1afa-4ef6-bdd6-37c60f87273f" containerName="marketplace-operator" Dec 10 14:39:13 crc kubenswrapper[4727]: E1210 14:39:13.363695 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" containerName="extract-utilities" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363703 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" containerName="extract-utilities" Dec 10 14:39:13 crc kubenswrapper[4727]: E1210 14:39:13.363711 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" containerName="extract-content" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363717 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" containerName="extract-content" Dec 10 14:39:13 crc kubenswrapper[4727]: E1210 14:39:13.363724 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" containerName="registry-server" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363729 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" containerName="registry-server" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363829 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="521cc0a4-1afa-4ef6-bdd6-37c60f87273f" containerName="marketplace-operator" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363842 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="521cc0a4-1afa-4ef6-bdd6-37c60f87273f" containerName="marketplace-operator" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363849 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" containerName="registry-server" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363856 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="517d22f8-c007-4428-82f1-1fe55445d509" containerName="registry-server" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363866 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" containerName="registry-server" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 
14:39:13.363873 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" containerName="registry-server" Dec 10 14:39:13 crc kubenswrapper[4727]: E1210 14:39:13.363985 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521cc0a4-1afa-4ef6-bdd6-37c60f87273f" containerName="marketplace-operator" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.363994 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="521cc0a4-1afa-4ef6-bdd6-37c60f87273f" containerName="marketplace-operator" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.364596 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.369557 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.393568 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bf29c"] Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.595765 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da553bbf-7e26-45f1-80d9-aed40900c3e4-utilities\") pod \"community-operators-bf29c\" (UID: \"da553bbf-7e26-45f1-80d9-aed40900c3e4\") " pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.595865 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da553bbf-7e26-45f1-80d9-aed40900c3e4-catalog-content\") pod \"community-operators-bf29c\" (UID: \"da553bbf-7e26-45f1-80d9-aed40900c3e4\") " pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.595892 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mqw7\" (UniqueName: \"kubernetes.io/projected/da553bbf-7e26-45f1-80d9-aed40900c3e4-kube-api-access-4mqw7\") pod \"community-operators-bf29c\" (UID: \"da553bbf-7e26-45f1-80d9-aed40900c3e4\") " pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.696619 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da553bbf-7e26-45f1-80d9-aed40900c3e4-utilities\") pod \"community-operators-bf29c\" (UID: \"da553bbf-7e26-45f1-80d9-aed40900c3e4\") " pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.696703 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da553bbf-7e26-45f1-80d9-aed40900c3e4-catalog-content\") pod \"community-operators-bf29c\" (UID: \"da553bbf-7e26-45f1-80d9-aed40900c3e4\") " pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.696721 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mqw7\" (UniqueName: \"kubernetes.io/projected/da553bbf-7e26-45f1-80d9-aed40900c3e4-kube-api-access-4mqw7\") pod \"community-operators-bf29c\" (UID: \"da553bbf-7e26-45f1-80d9-aed40900c3e4\") " 
pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.697622 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da553bbf-7e26-45f1-80d9-aed40900c3e4-catalog-content\") pod \"community-operators-bf29c\" (UID: \"da553bbf-7e26-45f1-80d9-aed40900c3e4\") " pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.697643 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da553bbf-7e26-45f1-80d9-aed40900c3e4-utilities\") pod \"community-operators-bf29c\" (UID: \"da553bbf-7e26-45f1-80d9-aed40900c3e4\") " pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.715887 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mqw7\" (UniqueName: \"kubernetes.io/projected/da553bbf-7e26-45f1-80d9-aed40900c3e4-kube-api-access-4mqw7\") pod \"community-operators-bf29c\" (UID: \"da553bbf-7e26-45f1-80d9-aed40900c3e4\") " pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:39:13 crc kubenswrapper[4727]: I1210 14:39:13.986624 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:39:14 crc kubenswrapper[4727]: I1210 14:39:14.519518 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bf29c"] Dec 10 14:39:14 crc kubenswrapper[4727]: W1210 14:39:14.523154 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda553bbf_7e26_45f1_80d9_aed40900c3e4.slice/crio-a0db79db3c3f5337b48229d880b0fb2deb438ffa7af6d4cff2e40be39922ecea WatchSource:0}: Error finding container a0db79db3c3f5337b48229d880b0fb2deb438ffa7af6d4cff2e40be39922ecea: Status 404 returned error can't find the container with id a0db79db3c3f5337b48229d880b0fb2deb438ffa7af6d4cff2e40be39922ecea Dec 10 14:39:14 crc kubenswrapper[4727]: I1210 14:39:14.572345 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2484515c-1846-4e63-9747-bc6dc81a574c" path="/var/lib/kubelet/pods/2484515c-1846-4e63-9747-bc6dc81a574c/volumes" Dec 10 14:39:14 crc kubenswrapper[4727]: I1210 14:39:14.573057 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="517d22f8-c007-4428-82f1-1fe55445d509" path="/var/lib/kubelet/pods/517d22f8-c007-4428-82f1-1fe55445d509/volumes" Dec 10 14:39:14 crc kubenswrapper[4727]: I1210 14:39:14.573749 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="521cc0a4-1afa-4ef6-bdd6-37c60f87273f" path="/var/lib/kubelet/pods/521cc0a4-1afa-4ef6-bdd6-37c60f87273f/volumes" Dec 10 14:39:14 crc kubenswrapper[4727]: I1210 14:39:14.574891 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f245f78-d777-49e5-8bf1-69a6bb04943b" path="/var/lib/kubelet/pods/7f245f78-d777-49e5-8bf1-69a6bb04943b/volumes" Dec 10 14:39:14 crc kubenswrapper[4727]: I1210 14:39:14.575664 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3b0146e-cc5a-48ef-904a-b2d28a6720f3" path="/var/lib/kubelet/pods/d3b0146e-cc5a-48ef-904a-b2d28a6720f3/volumes" Dec 10 14:39:14 crc kubenswrapper[4727]: I1210 14:39:14.803729 4727 generic.go:334] "Generic (PLEG): container finished" podID="da553bbf-7e26-45f1-80d9-aed40900c3e4" 
containerID="7d5d98883642eed90cdf5b886f74097ab48fda113f59e290651ba726ce200dbe" exitCode=0 Dec 10 14:39:14 crc kubenswrapper[4727]: I1210 14:39:14.803847 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf29c" event={"ID":"da553bbf-7e26-45f1-80d9-aed40900c3e4","Type":"ContainerDied","Data":"7d5d98883642eed90cdf5b886f74097ab48fda113f59e290651ba726ce200dbe"} Dec 10 14:39:14 crc kubenswrapper[4727]: I1210 14:39:14.804200 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf29c" event={"ID":"da553bbf-7e26-45f1-80d9-aed40900c3e4","Type":"ContainerStarted","Data":"a0db79db3c3f5337b48229d880b0fb2deb438ffa7af6d4cff2e40be39922ecea"} Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.160190 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n6w48"] Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.161508 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6w48" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.169782 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.171845 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6w48"] Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.350883 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2cc6852-6459-4018-86cf-8bec07f223d7-catalog-content\") pod \"redhat-marketplace-n6w48\" (UID: \"b2cc6852-6459-4018-86cf-8bec07f223d7\") " pod="openshift-marketplace/redhat-marketplace-n6w48" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.350994 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2cc6852-6459-4018-86cf-8bec07f223d7-utilities\") pod \"redhat-marketplace-n6w48\" (UID: \"b2cc6852-6459-4018-86cf-8bec07f223d7\") " pod="openshift-marketplace/redhat-marketplace-n6w48" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.351057 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4v75\" (UniqueName: \"kubernetes.io/projected/b2cc6852-6459-4018-86cf-8bec07f223d7-kube-api-access-s4v75\") pod \"redhat-marketplace-n6w48\" (UID: \"b2cc6852-6459-4018-86cf-8bec07f223d7\") " pod="openshift-marketplace/redhat-marketplace-n6w48" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.452231 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2cc6852-6459-4018-86cf-8bec07f223d7-catalog-content\") pod \"redhat-marketplace-n6w48\" (UID: \"b2cc6852-6459-4018-86cf-8bec07f223d7\") " pod="openshift-marketplace/redhat-marketplace-n6w48" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.452322 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2cc6852-6459-4018-86cf-8bec07f223d7-utilities\") pod \"redhat-marketplace-n6w48\" (UID: \"b2cc6852-6459-4018-86cf-8bec07f223d7\") " pod="openshift-marketplace/redhat-marketplace-n6w48" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.452345 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4v75\" (UniqueName: \"kubernetes.io/projected/b2cc6852-6459-4018-86cf-8bec07f223d7-kube-api-access-s4v75\") pod \"redhat-marketplace-n6w48\" (UID: \"b2cc6852-6459-4018-86cf-8bec07f223d7\") " pod="openshift-marketplace/redhat-marketplace-n6w48" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.453024 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2cc6852-6459-4018-86cf-8bec07f223d7-utilities\") pod \"redhat-marketplace-n6w48\" (UID: \"b2cc6852-6459-4018-86cf-8bec07f223d7\") " pod="openshift-marketplace/redhat-marketplace-n6w48" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.453023 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2cc6852-6459-4018-86cf-8bec07f223d7-catalog-content\") pod \"redhat-marketplace-n6w48\" (UID: \"b2cc6852-6459-4018-86cf-8bec07f223d7\") " pod="openshift-marketplace/redhat-marketplace-n6w48" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.474997 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4v75\" (UniqueName: \"kubernetes.io/projected/b2cc6852-6459-4018-86cf-8bec07f223d7-kube-api-access-s4v75\") pod \"redhat-marketplace-n6w48\" (UID: \"b2cc6852-6459-4018-86cf-8bec07f223d7\") " pod="openshift-marketplace/redhat-marketplace-n6w48" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.484632 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6w48" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.772010 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-72z9s"] Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.773772 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.778946 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.786376 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-72z9s"] Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.946513 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf4b8\" (UniqueName: \"kubernetes.io/projected/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-kube-api-access-jf4b8\") pod \"redhat-operators-72z9s\" (UID: \"d3cde39b-0b7a-4fdc-83f7-a213df953bfb\") " pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.946606 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-catalog-content\") pod \"redhat-operators-72z9s\" (UID: \"d3cde39b-0b7a-4fdc-83f7-a213df953bfb\") " pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 14:39:15 crc kubenswrapper[4727]: I1210 14:39:15.946638 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-utilities\") pod \"redhat-operators-72z9s\" (UID: \"d3cde39b-0b7a-4fdc-83f7-a213df953bfb\") " pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.048628 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-catalog-content\") pod \"redhat-operators-72z9s\" (UID: \"d3cde39b-0b7a-4fdc-83f7-a213df953bfb\") " pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.048694 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-utilities\") pod \"redhat-operators-72z9s\" (UID: \"d3cde39b-0b7a-4fdc-83f7-a213df953bfb\") " pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.048751 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf4b8\" (UniqueName: \"kubernetes.io/projected/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-kube-api-access-jf4b8\") pod \"redhat-operators-72z9s\" (UID: \"d3cde39b-0b7a-4fdc-83f7-a213df953bfb\") " pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.049699 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-utilities\") pod \"redhat-operators-72z9s\" (UID: \"d3cde39b-0b7a-4fdc-83f7-a213df953bfb\") " pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.049767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-catalog-content\") pod \"redhat-operators-72z9s\" (UID: \"d3cde39b-0b7a-4fdc-83f7-a213df953bfb\") " 
pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.084865 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf4b8\" (UniqueName: \"kubernetes.io/projected/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-kube-api-access-jf4b8\") pod \"redhat-operators-72z9s\" (UID: \"d3cde39b-0b7a-4fdc-83f7-a213df953bfb\") " pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.099698 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6w48"] Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.099950 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 14:39:16 crc kubenswrapper[4727]: W1210 14:39:16.120191 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2cc6852_6459_4018_86cf_8bec07f223d7.slice/crio-92ad02e95932659e198598ecd4dbf3843536bb0640e2132af3e67a2d715ef6fa WatchSource:0}: Error finding container 92ad02e95932659e198598ecd4dbf3843536bb0640e2132af3e67a2d715ef6fa: Status 404 returned error can't find the container with id 92ad02e95932659e198598ecd4dbf3843536bb0640e2132af3e67a2d715ef6fa Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.522120 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-72z9s"] Dec 10 14:39:16 crc kubenswrapper[4727]: W1210 14:39:16.522820 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3cde39b_0b7a_4fdc_83f7_a213df953bfb.slice/crio-37cffbe9788ccff4731be38c02531e63caeef611b5a51b0f7a32aad33a84ddb9 WatchSource:0}: Error finding container 37cffbe9788ccff4731be38c02531e63caeef611b5a51b0f7a32aad33a84ddb9: Status 404 returned error can't find the container with id 37cffbe9788ccff4731be38c02531e63caeef611b5a51b0f7a32aad33a84ddb9 Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.818438 4727 generic.go:334] "Generic (PLEG): container finished" podID="da553bbf-7e26-45f1-80d9-aed40900c3e4" containerID="de90ec23919727d1dc170cbb17db5a2aabd5336f76bd26faad94f588ab68d663" exitCode=0 Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.818535 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf29c" event={"ID":"da553bbf-7e26-45f1-80d9-aed40900c3e4","Type":"ContainerDied","Data":"de90ec23919727d1dc170cbb17db5a2aabd5336f76bd26faad94f588ab68d663"} Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.825639 4727 generic.go:334] "Generic (PLEG): container finished" podID="b2cc6852-6459-4018-86cf-8bec07f223d7" containerID="ba115b41f5519a36d1cfbd43c38cd4d2057d7ec63a52b4f0d88d5ed23d5239fa" exitCode=0 Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.825754 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6w48" event={"ID":"b2cc6852-6459-4018-86cf-8bec07f223d7","Type":"ContainerDied","Data":"ba115b41f5519a36d1cfbd43c38cd4d2057d7ec63a52b4f0d88d5ed23d5239fa"} Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.825790 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6w48" event={"ID":"b2cc6852-6459-4018-86cf-8bec07f223d7","Type":"ContainerStarted","Data":"92ad02e95932659e198598ecd4dbf3843536bb0640e2132af3e67a2d715ef6fa"} Dec 10 14:39:16 crc 
kubenswrapper[4727]: I1210 14:39:16.833089 4727 generic.go:334] "Generic (PLEG): container finished" podID="d3cde39b-0b7a-4fdc-83f7-a213df953bfb" containerID="f37003819559607ba15555364342966729ad9bbccfeec3d347408093aa61f1fd" exitCode=0 Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.833150 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72z9s" event={"ID":"d3cde39b-0b7a-4fdc-83f7-a213df953bfb","Type":"ContainerDied","Data":"f37003819559607ba15555364342966729ad9bbccfeec3d347408093aa61f1fd"} Dec 10 14:39:16 crc kubenswrapper[4727]: I1210 14:39:16.833193 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72z9s" event={"ID":"d3cde39b-0b7a-4fdc-83f7-a213df953bfb","Type":"ContainerStarted","Data":"37cffbe9788ccff4731be38c02531e63caeef611b5a51b0f7a32aad33a84ddb9"} Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.556270 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zf42t"] Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.558258 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.560450 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.576413 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zf42t"] Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.673083 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8089752a-539b-4f05-81d6-ef383b753227-utilities\") pod \"certified-operators-zf42t\" (UID: \"8089752a-539b-4f05-81d6-ef383b753227\") " pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.673210 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glmnf\" (UniqueName: \"kubernetes.io/projected/8089752a-539b-4f05-81d6-ef383b753227-kube-api-access-glmnf\") pod \"certified-operators-zf42t\" (UID: \"8089752a-539b-4f05-81d6-ef383b753227\") " pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.673285 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8089752a-539b-4f05-81d6-ef383b753227-catalog-content\") pod \"certified-operators-zf42t\" (UID: \"8089752a-539b-4f05-81d6-ef383b753227\") " pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.774570 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8089752a-539b-4f05-81d6-ef383b753227-utilities\") pod \"certified-operators-zf42t\" (UID: \"8089752a-539b-4f05-81d6-ef383b753227\") " pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.774642 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glmnf\" (UniqueName: \"kubernetes.io/projected/8089752a-539b-4f05-81d6-ef383b753227-kube-api-access-glmnf\") pod \"certified-operators-zf42t\" (UID: 
\"8089752a-539b-4f05-81d6-ef383b753227\") " pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.774715 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8089752a-539b-4f05-81d6-ef383b753227-catalog-content\") pod \"certified-operators-zf42t\" (UID: \"8089752a-539b-4f05-81d6-ef383b753227\") " pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.775493 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8089752a-539b-4f05-81d6-ef383b753227-utilities\") pod \"certified-operators-zf42t\" (UID: \"8089752a-539b-4f05-81d6-ef383b753227\") " pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.775543 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8089752a-539b-4f05-81d6-ef383b753227-catalog-content\") pod \"certified-operators-zf42t\" (UID: \"8089752a-539b-4f05-81d6-ef383b753227\") " pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.796672 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glmnf\" (UniqueName: \"kubernetes.io/projected/8089752a-539b-4f05-81d6-ef383b753227-kube-api-access-glmnf\") pod \"certified-operators-zf42t\" (UID: \"8089752a-539b-4f05-81d6-ef383b753227\") " pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.842038 4727 generic.go:334] "Generic (PLEG): container finished" podID="b2cc6852-6459-4018-86cf-8bec07f223d7" containerID="4820c764909cce07c0611f45166980032a68692dd40cf826d4e844b8721878cc" exitCode=0 Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.842124 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6w48" event={"ID":"b2cc6852-6459-4018-86cf-8bec07f223d7","Type":"ContainerDied","Data":"4820c764909cce07c0611f45166980032a68692dd40cf826d4e844b8721878cc"} Dec 10 14:39:17 crc kubenswrapper[4727]: I1210 14:39:17.898024 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:39:18 crc kubenswrapper[4727]: I1210 14:39:18.315414 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zf42t"] Dec 10 14:39:18 crc kubenswrapper[4727]: I1210 14:39:18.853646 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf29c" event={"ID":"da553bbf-7e26-45f1-80d9-aed40900c3e4","Type":"ContainerStarted","Data":"810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8"} Dec 10 14:39:18 crc kubenswrapper[4727]: I1210 14:39:18.861493 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6w48" event={"ID":"b2cc6852-6459-4018-86cf-8bec07f223d7","Type":"ContainerStarted","Data":"41e13f11b2dd26bf7d5e6f6a1878b1eb75a7325e48370f02704c5ef2c50570e0"} Dec 10 14:39:18 crc kubenswrapper[4727]: I1210 14:39:18.863615 4727 generic.go:334] "Generic (PLEG): container finished" podID="8089752a-539b-4f05-81d6-ef383b753227" containerID="0c373e406dcb4de03c5abe950920e7bc30c36af6809476cc922c052470553642" exitCode=0 Dec 10 14:39:18 crc kubenswrapper[4727]: I1210 14:39:18.863647 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf42t" event={"ID":"8089752a-539b-4f05-81d6-ef383b753227","Type":"ContainerDied","Data":"0c373e406dcb4de03c5abe950920e7bc30c36af6809476cc922c052470553642"} Dec 10 14:39:18 crc kubenswrapper[4727]: I1210 14:39:18.863664 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf42t" event={"ID":"8089752a-539b-4f05-81d6-ef383b753227","Type":"ContainerStarted","Data":"2b6c02d9eea69f5e3e0c2559c00c2afe2269e80f3283c82d8a6395c4cab72862"} Dec 10 14:39:18 crc kubenswrapper[4727]: I1210 14:39:18.889646 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bf29c" podStartSLOduration=2.867059988 podStartE2EDuration="5.889626525s" podCreationTimestamp="2025-12-10 14:39:13 +0000 UTC" firstStartedPulling="2025-12-10 14:39:14.805330425 +0000 UTC m=+459.000104967" lastFinishedPulling="2025-12-10 14:39:17.827896962 +0000 UTC m=+462.022671504" observedRunningTime="2025-12-10 14:39:18.887551458 +0000 UTC m=+463.082326000" watchObservedRunningTime="2025-12-10 14:39:18.889626525 +0000 UTC m=+463.084401057" Dec 10 14:39:18 crc kubenswrapper[4727]: I1210 14:39:18.909245 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n6w48" podStartSLOduration=2.434708833 podStartE2EDuration="3.909224255s" podCreationTimestamp="2025-12-10 14:39:15 +0000 UTC" firstStartedPulling="2025-12-10 14:39:16.827614492 +0000 UTC m=+461.022389034" lastFinishedPulling="2025-12-10 14:39:18.302129914 +0000 UTC m=+462.496904456" observedRunningTime="2025-12-10 14:39:18.908197656 +0000 UTC m=+463.102972208" watchObservedRunningTime="2025-12-10 14:39:18.909224255 +0000 UTC m=+463.103998797" Dec 10 14:39:20 crc kubenswrapper[4727]: I1210 14:39:20.882015 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf42t" event={"ID":"8089752a-539b-4f05-81d6-ef383b753227","Type":"ContainerDied","Data":"0e7ff727711d81f71d11467f87790881f6b385dcaee1f28a8c38c409ae2fa601"} Dec 10 14:39:20 crc kubenswrapper[4727]: I1210 14:39:20.881990 4727 generic.go:334] "Generic (PLEG): container finished" podID="8089752a-539b-4f05-81d6-ef383b753227" 
containerID="0e7ff727711d81f71d11467f87790881f6b385dcaee1f28a8c38c409ae2fa601" exitCode=0 Dec 10 14:39:23 crc kubenswrapper[4727]: I1210 14:39:23.987557 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:39:23 crc kubenswrapper[4727]: I1210 14:39:23.988093 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:39:24 crc kubenswrapper[4727]: I1210 14:39:24.051600 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:39:24 crc kubenswrapper[4727]: I1210 14:39:24.964502 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:39:25 crc kubenswrapper[4727]: I1210 14:39:25.484858 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n6w48" Dec 10 14:39:25 crc kubenswrapper[4727]: I1210 14:39:25.484925 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n6w48" Dec 10 14:39:25 crc kubenswrapper[4727]: I1210 14:39:25.516349 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" podUID="2eeb0c6b-fae8-47f7-91d4-42af15045dfe" containerName="registry" containerID="cri-o://22753eb65466a644118b319b4bb0574cedc21cacb390a72561e89e5922a187a2" gracePeriod=30 Dec 10 14:39:25 crc kubenswrapper[4727]: I1210 14:39:25.530728 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n6w48" Dec 10 14:39:27 crc kubenswrapper[4727]: I1210 14:39:27.093764 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n6w48" Dec 10 14:39:27 crc kubenswrapper[4727]: I1210 14:39:27.384778 4727 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-n87wt container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" start-of-body= Dec 10 14:39:27 crc kubenswrapper[4727]: I1210 14:39:27.384885 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" podUID="2eeb0c6b-fae8-47f7-91d4-42af15045dfe" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.043880 4727 generic.go:334] "Generic (PLEG): container finished" podID="2eeb0c6b-fae8-47f7-91d4-42af15045dfe" containerID="22753eb65466a644118b319b4bb0574cedc21cacb390a72561e89e5922a187a2" exitCode=0 Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.043945 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" event={"ID":"2eeb0c6b-fae8-47f7-91d4-42af15045dfe","Type":"ContainerDied","Data":"22753eb65466a644118b319b4bb0574cedc21cacb390a72561e89e5922a187a2"} Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.554133 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.707241 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-trusted-ca\") pod \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.707316 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7fpm\" (UniqueName: \"kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-kube-api-access-q7fpm\") pod \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.707354 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-registry-tls\") pod \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.707536 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.707627 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-bound-sa-token\") pod \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.707667 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-installation-pull-secrets\") pod \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.707691 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-registry-certificates\") pod \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.707727 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-ca-trust-extracted\") pod \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\" (UID: \"2eeb0c6b-fae8-47f7-91d4-42af15045dfe\") " Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.709104 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2eeb0c6b-fae8-47f7-91d4-42af15045dfe" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.711644 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "2eeb0c6b-fae8-47f7-91d4-42af15045dfe" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.717091 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "2eeb0c6b-fae8-47f7-91d4-42af15045dfe" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.717119 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "2eeb0c6b-fae8-47f7-91d4-42af15045dfe" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.717386 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "2eeb0c6b-fae8-47f7-91d4-42af15045dfe" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.728382 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "2eeb0c6b-fae8-47f7-91d4-42af15045dfe" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.732239 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "2eeb0c6b-fae8-47f7-91d4-42af15045dfe" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.737946 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-kube-api-access-q7fpm" (OuterVolumeSpecName: "kube-api-access-q7fpm") pod "2eeb0c6b-fae8-47f7-91d4-42af15045dfe" (UID: "2eeb0c6b-fae8-47f7-91d4-42af15045dfe"). InnerVolumeSpecName "kube-api-access-q7fpm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.808865 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.808925 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7fpm\" (UniqueName: \"kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-kube-api-access-q7fpm\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.808941 4727 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.808954 4727 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.808969 4727 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.808981 4727 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:28 crc kubenswrapper[4727]: I1210 14:39:28.808992 4727 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2eeb0c6b-fae8-47f7-91d4-42af15045dfe-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:29 crc kubenswrapper[4727]: I1210 14:39:29.051547 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72z9s" event={"ID":"d3cde39b-0b7a-4fdc-83f7-a213df953bfb","Type":"ContainerStarted","Data":"905bbd6eb1418afdb4b260ada312213d805521cddebecdd9f8585b937f574322"} Dec 10 14:39:29 crc kubenswrapper[4727]: I1210 14:39:29.054317 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf42t" event={"ID":"8089752a-539b-4f05-81d6-ef383b753227","Type":"ContainerStarted","Data":"101502e78d036f6fa46932b317e3f33e89e5d40f792629350a02eeffa7c68f2f"} Dec 10 14:39:29 crc kubenswrapper[4727]: I1210 14:39:29.055988 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" event={"ID":"2eeb0c6b-fae8-47f7-91d4-42af15045dfe","Type":"ContainerDied","Data":"edf26df08a80e4e1e4478f8ba9c614ecdca2e58249dfb3ae4d6bb421eed1eed3"} Dec 10 14:39:29 crc kubenswrapper[4727]: I1210 14:39:29.056021 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n87wt" Dec 10 14:39:29 crc kubenswrapper[4727]: I1210 14:39:29.056060 4727 scope.go:117] "RemoveContainer" containerID="22753eb65466a644118b319b4bb0574cedc21cacb390a72561e89e5922a187a2" Dec 10 14:39:29 crc kubenswrapper[4727]: I1210 14:39:29.091626 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n87wt"] Dec 10 14:39:29 crc kubenswrapper[4727]: I1210 14:39:29.094803 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n87wt"] Dec 10 14:39:29 crc kubenswrapper[4727]: I1210 14:39:29.123267 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zf42t" podStartSLOduration=2.737101024 podStartE2EDuration="12.123232348s" podCreationTimestamp="2025-12-10 14:39:17 +0000 UTC" firstStartedPulling="2025-12-10 14:39:18.866762755 +0000 UTC m=+463.061537287" lastFinishedPulling="2025-12-10 14:39:28.252894069 +0000 UTC m=+472.447668611" observedRunningTime="2025-12-10 14:39:29.122401955 +0000 UTC m=+473.317176497" watchObservedRunningTime="2025-12-10 14:39:29.123232348 +0000 UTC m=+473.318006890" Dec 10 14:39:30 crc kubenswrapper[4727]: I1210 14:39:30.067107 4727 generic.go:334] "Generic (PLEG): container finished" podID="d3cde39b-0b7a-4fdc-83f7-a213df953bfb" containerID="905bbd6eb1418afdb4b260ada312213d805521cddebecdd9f8585b937f574322" exitCode=0 Dec 10 14:39:30 crc kubenswrapper[4727]: I1210 14:39:30.067227 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72z9s" event={"ID":"d3cde39b-0b7a-4fdc-83f7-a213df953bfb","Type":"ContainerDied","Data":"905bbd6eb1418afdb4b260ada312213d805521cddebecdd9f8585b937f574322"} Dec 10 14:39:30 crc kubenswrapper[4727]: I1210 14:39:30.577085 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eeb0c6b-fae8-47f7-91d4-42af15045dfe" path="/var/lib/kubelet/pods/2eeb0c6b-fae8-47f7-91d4-42af15045dfe/volumes" Dec 10 14:39:36 crc kubenswrapper[4727]: I1210 14:39:36.114584 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72z9s" event={"ID":"d3cde39b-0b7a-4fdc-83f7-a213df953bfb","Type":"ContainerStarted","Data":"cd352b52e93569947f17893fe8e5cf5b01ce08504007e733f409b1b332137bae"} Dec 10 14:39:37 crc kubenswrapper[4727]: I1210 14:39:37.898257 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:39:37 crc kubenswrapper[4727]: I1210 14:39:37.898580 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:39:37 crc kubenswrapper[4727]: I1210 14:39:37.945782 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:39:37 crc kubenswrapper[4727]: I1210 14:39:37.966640 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-72z9s" podStartSLOduration=4.604800161 podStartE2EDuration="22.966612401s" podCreationTimestamp="2025-12-10 14:39:15 +0000 UTC" firstStartedPulling="2025-12-10 14:39:16.834721908 +0000 UTC m=+461.029496450" lastFinishedPulling="2025-12-10 14:39:35.196534148 +0000 UTC m=+479.391308690" observedRunningTime="2025-12-10 14:39:37.140025566 +0000 UTC m=+481.334800108" 
watchObservedRunningTime="2025-12-10 14:39:37.966612401 +0000 UTC m=+482.161386943" Dec 10 14:39:38 crc kubenswrapper[4727]: I1210 14:39:38.166314 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:39:46 crc kubenswrapper[4727]: I1210 14:39:46.100465 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 14:39:46 crc kubenswrapper[4727]: I1210 14:39:46.101744 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 14:39:46 crc kubenswrapper[4727]: I1210 14:39:46.147988 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 14:39:46 crc kubenswrapper[4727]: I1210 14:39:46.210997 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 14:41:37 crc kubenswrapper[4727]: I1210 14:41:37.724301 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:41:37 crc kubenswrapper[4727]: I1210 14:41:37.725013 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:42:07 crc kubenswrapper[4727]: I1210 14:42:07.724442 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:42:07 crc kubenswrapper[4727]: I1210 14:42:07.725469 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:42:37 crc kubenswrapper[4727]: I1210 14:42:37.723997 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:42:37 crc kubenswrapper[4727]: I1210 14:42:37.724689 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:42:37 crc kubenswrapper[4727]: I1210 14:42:37.724780 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:42:37 crc kubenswrapper[4727]: I1210 14:42:37.725573 4727 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6962415495a564f00fc2a840efe560b2ce35c3f78b92a0cc2b2c5231d5185282"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 14:42:37 crc kubenswrapper[4727]: I1210 14:42:37.725636 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://6962415495a564f00fc2a840efe560b2ce35c3f78b92a0cc2b2c5231d5185282" gracePeriod=600 Dec 10 14:42:38 crc kubenswrapper[4727]: I1210 14:42:38.556431 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="6962415495a564f00fc2a840efe560b2ce35c3f78b92a0cc2b2c5231d5185282" exitCode=0 Dec 10 14:42:38 crc kubenswrapper[4727]: I1210 14:42:38.556478 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"6962415495a564f00fc2a840efe560b2ce35c3f78b92a0cc2b2c5231d5185282"} Dec 10 14:42:38 crc kubenswrapper[4727]: I1210 14:42:38.556767 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"cd46d2062fb92e117b59daaca2ef5ffa90b444c25a1b8e3e5c4e2bdf99695cf9"} Dec 10 14:42:38 crc kubenswrapper[4727]: I1210 14:42:38.556805 4727 scope.go:117] "RemoveContainer" containerID="51efd2633926a32a5d87ebb689d788f19863dd8e68c34097ba24f8c6d596b5ea" Dec 10 14:44:41 crc kubenswrapper[4727]: I1210 14:44:41.502168 4727 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.183137 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt"] Dec 10 14:45:00 crc kubenswrapper[4727]: E1210 14:45:00.184179 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eeb0c6b-fae8-47f7-91d4-42af15045dfe" containerName="registry" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.184211 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eeb0c6b-fae8-47f7-91d4-42af15045dfe" containerName="registry" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.184390 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eeb0c6b-fae8-47f7-91d4-42af15045dfe" containerName="registry" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.185218 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.187743 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.188140 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.188274 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt"] Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.284858 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cebada22-c034-440c-af94-882f29f42989-secret-volume\") pod \"collect-profiles-29422965-zvlgt\" (UID: \"cebada22-c034-440c-af94-882f29f42989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.284954 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h9cf\" (UniqueName: \"kubernetes.io/projected/cebada22-c034-440c-af94-882f29f42989-kube-api-access-9h9cf\") pod \"collect-profiles-29422965-zvlgt\" (UID: \"cebada22-c034-440c-af94-882f29f42989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.284974 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cebada22-c034-440c-af94-882f29f42989-config-volume\") pod \"collect-profiles-29422965-zvlgt\" (UID: \"cebada22-c034-440c-af94-882f29f42989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.386280 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cebada22-c034-440c-af94-882f29f42989-secret-volume\") pod \"collect-profiles-29422965-zvlgt\" (UID: \"cebada22-c034-440c-af94-882f29f42989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.386382 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h9cf\" (UniqueName: \"kubernetes.io/projected/cebada22-c034-440c-af94-882f29f42989-kube-api-access-9h9cf\") pod \"collect-profiles-29422965-zvlgt\" (UID: \"cebada22-c034-440c-af94-882f29f42989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.386411 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cebada22-c034-440c-af94-882f29f42989-config-volume\") pod \"collect-profiles-29422965-zvlgt\" (UID: \"cebada22-c034-440c-af94-882f29f42989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.387537 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cebada22-c034-440c-af94-882f29f42989-config-volume\") pod 
\"collect-profiles-29422965-zvlgt\" (UID: \"cebada22-c034-440c-af94-882f29f42989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.392525 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cebada22-c034-440c-af94-882f29f42989-secret-volume\") pod \"collect-profiles-29422965-zvlgt\" (UID: \"cebada22-c034-440c-af94-882f29f42989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.404427 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h9cf\" (UniqueName: \"kubernetes.io/projected/cebada22-c034-440c-af94-882f29f42989-kube-api-access-9h9cf\") pod \"collect-profiles-29422965-zvlgt\" (UID: \"cebada22-c034-440c-af94-882f29f42989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.507771 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" Dec 10 14:45:00 crc kubenswrapper[4727]: I1210 14:45:00.725571 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt"] Dec 10 14:45:01 crc kubenswrapper[4727]: I1210 14:45:01.446427 4727 generic.go:334] "Generic (PLEG): container finished" podID="cebada22-c034-440c-af94-882f29f42989" containerID="e80c6048c15ace1d18c2716a561110a503ed21b16587f93646d21a09ceabdd6d" exitCode=0 Dec 10 14:45:01 crc kubenswrapper[4727]: I1210 14:45:01.446481 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" event={"ID":"cebada22-c034-440c-af94-882f29f42989","Type":"ContainerDied","Data":"e80c6048c15ace1d18c2716a561110a503ed21b16587f93646d21a09ceabdd6d"} Dec 10 14:45:01 crc kubenswrapper[4727]: I1210 14:45:01.446767 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" event={"ID":"cebada22-c034-440c-af94-882f29f42989","Type":"ContainerStarted","Data":"3b738dc3c075f2ba044b0627d702eb572e4323f93f202fb091b41a75de2311eb"} Dec 10 14:45:02 crc kubenswrapper[4727]: I1210 14:45:02.670219 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" Dec 10 14:45:02 crc kubenswrapper[4727]: I1210 14:45:02.816269 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h9cf\" (UniqueName: \"kubernetes.io/projected/cebada22-c034-440c-af94-882f29f42989-kube-api-access-9h9cf\") pod \"cebada22-c034-440c-af94-882f29f42989\" (UID: \"cebada22-c034-440c-af94-882f29f42989\") " Dec 10 14:45:02 crc kubenswrapper[4727]: I1210 14:45:02.816330 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cebada22-c034-440c-af94-882f29f42989-secret-volume\") pod \"cebada22-c034-440c-af94-882f29f42989\" (UID: \"cebada22-c034-440c-af94-882f29f42989\") " Dec 10 14:45:02 crc kubenswrapper[4727]: I1210 14:45:02.816387 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cebada22-c034-440c-af94-882f29f42989-config-volume\") pod \"cebada22-c034-440c-af94-882f29f42989\" (UID: \"cebada22-c034-440c-af94-882f29f42989\") " Dec 10 14:45:02 crc kubenswrapper[4727]: I1210 14:45:02.817436 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cebada22-c034-440c-af94-882f29f42989-config-volume" (OuterVolumeSpecName: "config-volume") pod "cebada22-c034-440c-af94-882f29f42989" (UID: "cebada22-c034-440c-af94-882f29f42989"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:45:02 crc kubenswrapper[4727]: I1210 14:45:02.821501 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cebada22-c034-440c-af94-882f29f42989-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cebada22-c034-440c-af94-882f29f42989" (UID: "cebada22-c034-440c-af94-882f29f42989"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:45:02 crc kubenswrapper[4727]: I1210 14:45:02.821864 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cebada22-c034-440c-af94-882f29f42989-kube-api-access-9h9cf" (OuterVolumeSpecName: "kube-api-access-9h9cf") pod "cebada22-c034-440c-af94-882f29f42989" (UID: "cebada22-c034-440c-af94-882f29f42989"). InnerVolumeSpecName "kube-api-access-9h9cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:45:02 crc kubenswrapper[4727]: I1210 14:45:02.917789 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h9cf\" (UniqueName: \"kubernetes.io/projected/cebada22-c034-440c-af94-882f29f42989-kube-api-access-9h9cf\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:02 crc kubenswrapper[4727]: I1210 14:45:02.917848 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cebada22-c034-440c-af94-882f29f42989-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:02 crc kubenswrapper[4727]: I1210 14:45:02.917865 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cebada22-c034-440c-af94-882f29f42989-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:03 crc kubenswrapper[4727]: I1210 14:45:03.462485 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" event={"ID":"cebada22-c034-440c-af94-882f29f42989","Type":"ContainerDied","Data":"3b738dc3c075f2ba044b0627d702eb572e4323f93f202fb091b41a75de2311eb"} Dec 10 14:45:03 crc kubenswrapper[4727]: I1210 14:45:03.462528 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt" Dec 10 14:45:03 crc kubenswrapper[4727]: I1210 14:45:03.462971 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b738dc3c075f2ba044b0627d702eb572e4323f93f202fb091b41a75de2311eb" Dec 10 14:45:07 crc kubenswrapper[4727]: I1210 14:45:07.723705 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:45:07 crc kubenswrapper[4727]: I1210 14:45:07.723784 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.499661 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf"] Dec 10 14:45:37 crc kubenswrapper[4727]: E1210 14:45:37.500384 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cebada22-c034-440c-af94-882f29f42989" containerName="collect-profiles" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.500403 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cebada22-c034-440c-af94-882f29f42989" containerName="collect-profiles" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.500561 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="cebada22-c034-440c-af94-882f29f42989" containerName="collect-profiles" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.501662 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.524466 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.552662 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf"] Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.696493 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf\" (UID: \"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.696546 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf\" (UID: \"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.696671 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zm7g\" (UniqueName: \"kubernetes.io/projected/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-kube-api-access-8zm7g\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf\" (UID: \"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.724141 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.724238 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.797350 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zm7g\" (UniqueName: \"kubernetes.io/projected/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-kube-api-access-8zm7g\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf\" (UID: \"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.797453 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf\" (UID: \"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.797491 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf\" (UID: \"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.798369 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf\" (UID: \"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.798533 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf\" (UID: \"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.824164 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zm7g\" (UniqueName: \"kubernetes.io/projected/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-kube-api-access-8zm7g\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf\" (UID: \"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" Dec 10 14:45:37 crc kubenswrapper[4727]: I1210 14:45:37.843185 4727 util.go:30] "No sandbox for pod can be found. 
Dec 10 14:45:38 crc kubenswrapper[4727]: I1210 14:45:38.099579 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf"]
Dec 10 14:45:38 crc kubenswrapper[4727]: I1210 14:45:38.697385 4727 generic.go:334] "Generic (PLEG): container finished" podID="3a8f90ea-a6d0-4ea4-8573-2ea50493e86e" containerID="8b4166d29d24dae30a8dc9a59b6e95b4157be437e6c9ff8713b0375bf9e58046" exitCode=0
Dec 10 14:45:38 crc kubenswrapper[4727]: I1210 14:45:38.697499 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" event={"ID":"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e","Type":"ContainerDied","Data":"8b4166d29d24dae30a8dc9a59b6e95b4157be437e6c9ff8713b0375bf9e58046"}
Dec 10 14:45:38 crc kubenswrapper[4727]: I1210 14:45:38.697772 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" event={"ID":"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e","Type":"ContainerStarted","Data":"dc4c9dd0611e900b83a4fcb99c94058e4ea4cddbf3daa4ddb9b87e15269a37b2"}
Dec 10 14:45:38 crc kubenswrapper[4727]: I1210 14:45:38.699248 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 10 14:45:39 crc kubenswrapper[4727]: I1210 14:45:39.718582 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z8mtg"]
Dec 10 14:45:39 crc kubenswrapper[4727]: I1210 14:45:39.723717 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z8mtg"
Dec 10 14:45:39 crc kubenswrapper[4727]: I1210 14:45:39.733299 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8mtg"]
Dec 10 14:45:39 crc kubenswrapper[4727]: I1210 14:45:39.743685 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb59750-e50a-4669-bcdd-8d0584d78f06-utilities\") pod \"redhat-operators-z8mtg\" (UID: \"afb59750-e50a-4669-bcdd-8d0584d78f06\") " pod="openshift-marketplace/redhat-operators-z8mtg"
Dec 10 14:45:39 crc kubenswrapper[4727]: I1210 14:45:39.743790 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgckm\" (UniqueName: \"kubernetes.io/projected/afb59750-e50a-4669-bcdd-8d0584d78f06-kube-api-access-rgckm\") pod \"redhat-operators-z8mtg\" (UID: \"afb59750-e50a-4669-bcdd-8d0584d78f06\") " pod="openshift-marketplace/redhat-operators-z8mtg"
Dec 10 14:45:39 crc kubenswrapper[4727]: I1210 14:45:39.743834 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb59750-e50a-4669-bcdd-8d0584d78f06-catalog-content\") pod \"redhat-operators-z8mtg\" (UID: \"afb59750-e50a-4669-bcdd-8d0584d78f06\") " pod="openshift-marketplace/redhat-operators-z8mtg"
Dec 10 14:45:39 crc kubenswrapper[4727]: I1210 14:45:39.888373 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgckm\" (UniqueName: \"kubernetes.io/projected/afb59750-e50a-4669-bcdd-8d0584d78f06-kube-api-access-rgckm\") pod \"redhat-operators-z8mtg\" (UID: \"afb59750-e50a-4669-bcdd-8d0584d78f06\") " pod="openshift-marketplace/redhat-operators-z8mtg"
\"afb59750-e50a-4669-bcdd-8d0584d78f06\") " pod="openshift-marketplace/redhat-operators-z8mtg" Dec 10 14:45:39 crc kubenswrapper[4727]: I1210 14:45:39.888451 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb59750-e50a-4669-bcdd-8d0584d78f06-catalog-content\") pod \"redhat-operators-z8mtg\" (UID: \"afb59750-e50a-4669-bcdd-8d0584d78f06\") " pod="openshift-marketplace/redhat-operators-z8mtg" Dec 10 14:45:39 crc kubenswrapper[4727]: I1210 14:45:39.888516 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb59750-e50a-4669-bcdd-8d0584d78f06-utilities\") pod \"redhat-operators-z8mtg\" (UID: \"afb59750-e50a-4669-bcdd-8d0584d78f06\") " pod="openshift-marketplace/redhat-operators-z8mtg" Dec 10 14:45:39 crc kubenswrapper[4727]: I1210 14:45:39.889031 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb59750-e50a-4669-bcdd-8d0584d78f06-utilities\") pod \"redhat-operators-z8mtg\" (UID: \"afb59750-e50a-4669-bcdd-8d0584d78f06\") " pod="openshift-marketplace/redhat-operators-z8mtg" Dec 10 14:45:39 crc kubenswrapper[4727]: I1210 14:45:39.889335 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb59750-e50a-4669-bcdd-8d0584d78f06-catalog-content\") pod \"redhat-operators-z8mtg\" (UID: \"afb59750-e50a-4669-bcdd-8d0584d78f06\") " pod="openshift-marketplace/redhat-operators-z8mtg" Dec 10 14:45:39 crc kubenswrapper[4727]: I1210 14:45:39.911644 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgckm\" (UniqueName: \"kubernetes.io/projected/afb59750-e50a-4669-bcdd-8d0584d78f06-kube-api-access-rgckm\") pod \"redhat-operators-z8mtg\" (UID: \"afb59750-e50a-4669-bcdd-8d0584d78f06\") " pod="openshift-marketplace/redhat-operators-z8mtg" Dec 10 14:45:40 crc kubenswrapper[4727]: I1210 14:45:40.041238 4727 util.go:30] "No sandbox for pod can be found. 
Dec 10 14:45:40 crc kubenswrapper[4727]: I1210 14:45:40.412419 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8mtg"]
Dec 10 14:45:40 crc kubenswrapper[4727]: I1210 14:45:40.710750 4727 generic.go:334] "Generic (PLEG): container finished" podID="3a8f90ea-a6d0-4ea4-8573-2ea50493e86e" containerID="8fe7ee7e2e585e7cb13e886ae2c75e5e04532630402e808b466492ab6e5302da" exitCode=0
Dec 10 14:45:40 crc kubenswrapper[4727]: I1210 14:45:40.710838 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" event={"ID":"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e","Type":"ContainerDied","Data":"8fe7ee7e2e585e7cb13e886ae2c75e5e04532630402e808b466492ab6e5302da"}
Dec 10 14:45:40 crc kubenswrapper[4727]: I1210 14:45:40.713548 4727 generic.go:334] "Generic (PLEG): container finished" podID="afb59750-e50a-4669-bcdd-8d0584d78f06" containerID="c5f596854ae5521bc09321ffea26d1e8a7062640aad6e37d47367d07c8de651e" exitCode=0
Dec 10 14:45:40 crc kubenswrapper[4727]: I1210 14:45:40.713597 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8mtg" event={"ID":"afb59750-e50a-4669-bcdd-8d0584d78f06","Type":"ContainerDied","Data":"c5f596854ae5521bc09321ffea26d1e8a7062640aad6e37d47367d07c8de651e"}
Dec 10 14:45:40 crc kubenswrapper[4727]: I1210 14:45:40.713627 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8mtg" event={"ID":"afb59750-e50a-4669-bcdd-8d0584d78f06","Type":"ContainerStarted","Data":"174c32be8a68c5fe2449d6282e61f78927ebd257f7f52fe0842f45d9120a5b94"}
Dec 10 14:45:41 crc kubenswrapper[4727]: I1210 14:45:41.722179 4727 generic.go:334] "Generic (PLEG): container finished" podID="3a8f90ea-a6d0-4ea4-8573-2ea50493e86e" containerID="e834754f97f4869c261f1cb7376e186a8c6fec70f1fc48256f90d623cfa4ed5e" exitCode=0
Dec 10 14:45:41 crc kubenswrapper[4727]: I1210 14:45:41.722426 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" event={"ID":"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e","Type":"ContainerDied","Data":"e834754f97f4869c261f1cb7376e186a8c6fec70f1fc48256f90d623cfa4ed5e"}
Dec 10 14:45:41 crc kubenswrapper[4727]: I1210 14:45:41.726370 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8mtg" event={"ID":"afb59750-e50a-4669-bcdd-8d0584d78f06","Type":"ContainerStarted","Data":"87b6866b78d3357718ce16086d39db219f3e2a4225caa43290c880aec6c0f341"}
Dec 10 14:45:43 crc kubenswrapper[4727]: I1210 14:45:43.263241 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf"
Dec 10 14:45:43 crc kubenswrapper[4727]: I1210 14:45:43.458333 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zm7g\" (UniqueName: \"kubernetes.io/projected/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-kube-api-access-8zm7g\") pod \"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e\" (UID: \"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e\") "
Dec 10 14:45:43 crc kubenswrapper[4727]: I1210 14:45:43.458499 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-bundle\") pod \"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e\" (UID: \"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e\") "
Dec 10 14:45:43 crc kubenswrapper[4727]: I1210 14:45:43.458528 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-util\") pod \"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e\" (UID: \"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e\") "
Dec 10 14:45:43 crc kubenswrapper[4727]: I1210 14:45:43.461065 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-bundle" (OuterVolumeSpecName: "bundle") pod "3a8f90ea-a6d0-4ea4-8573-2ea50493e86e" (UID: "3a8f90ea-a6d0-4ea4-8573-2ea50493e86e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:45:43 crc kubenswrapper[4727]: I1210 14:45:43.472851 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-util" (OuterVolumeSpecName: "util") pod "3a8f90ea-a6d0-4ea4-8573-2ea50493e86e" (UID: "3a8f90ea-a6d0-4ea4-8573-2ea50493e86e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:45:43 crc kubenswrapper[4727]: I1210 14:45:43.560732 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 14:45:43 crc kubenswrapper[4727]: I1210 14:45:43.560794 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-util\") on node \"crc\" DevicePath \"\""
Dec 10 14:45:43 crc kubenswrapper[4727]: I1210 14:45:43.617366 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-kube-api-access-8zm7g" (OuterVolumeSpecName: "kube-api-access-8zm7g") pod "3a8f90ea-a6d0-4ea4-8573-2ea50493e86e" (UID: "3a8f90ea-a6d0-4ea4-8573-2ea50493e86e"). InnerVolumeSpecName "kube-api-access-8zm7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:45:43 crc kubenswrapper[4727]: I1210 14:45:43.661391 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zm7g\" (UniqueName: \"kubernetes.io/projected/3a8f90ea-a6d0-4ea4-8573-2ea50493e86e-kube-api-access-8zm7g\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:43 crc kubenswrapper[4727]: I1210 14:45:43.740895 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" event={"ID":"3a8f90ea-a6d0-4ea4-8573-2ea50493e86e","Type":"ContainerDied","Data":"dc4c9dd0611e900b83a4fcb99c94058e4ea4cddbf3daa4ddb9b87e15269a37b2"} Dec 10 14:45:43 crc kubenswrapper[4727]: I1210 14:45:43.741019 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc4c9dd0611e900b83a4fcb99c94058e4ea4cddbf3daa4ddb9b87e15269a37b2" Dec 10 14:45:43 crc kubenswrapper[4727]: I1210 14:45:43.740966 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf" Dec 10 14:45:44 crc kubenswrapper[4727]: I1210 14:45:44.748102 4727 generic.go:334] "Generic (PLEG): container finished" podID="afb59750-e50a-4669-bcdd-8d0584d78f06" containerID="87b6866b78d3357718ce16086d39db219f3e2a4225caa43290c880aec6c0f341" exitCode=0 Dec 10 14:45:44 crc kubenswrapper[4727]: I1210 14:45:44.748212 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8mtg" event={"ID":"afb59750-e50a-4669-bcdd-8d0584d78f06","Type":"ContainerDied","Data":"87b6866b78d3357718ce16086d39db219f3e2a4225caa43290c880aec6c0f341"} Dec 10 14:45:45 crc kubenswrapper[4727]: I1210 14:45:45.754688 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8mtg" event={"ID":"afb59750-e50a-4669-bcdd-8d0584d78f06","Type":"ContainerStarted","Data":"a78116fab72f651d364467bc7f9057820ba406760aace4d0e7d09c4d87ce469f"} Dec 10 14:45:45 crc kubenswrapper[4727]: I1210 14:45:45.772464 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z8mtg" podStartSLOduration=2.307812712 podStartE2EDuration="6.772441878s" podCreationTimestamp="2025-12-10 14:45:39 +0000 UTC" firstStartedPulling="2025-12-10 14:45:40.715508226 +0000 UTC m=+844.910282758" lastFinishedPulling="2025-12-10 14:45:45.180137382 +0000 UTC m=+849.374911924" observedRunningTime="2025-12-10 14:45:45.7716057 +0000 UTC m=+849.966380262" watchObservedRunningTime="2025-12-10 14:45:45.772441878 +0000 UTC m=+849.967216420" Dec 10 14:45:47 crc kubenswrapper[4727]: I1210 14:45:47.652477 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k8b7p"] Dec 10 14:45:47 crc kubenswrapper[4727]: I1210 14:45:47.653372 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovn-controller" containerID="cri-o://cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc" gracePeriod=30 Dec 10 14:45:47 crc kubenswrapper[4727]: I1210 14:45:47.653440 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="northd" containerID="cri-o://a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f" 
Dec 10 14:45:47 crc kubenswrapper[4727]: I1210 14:45:47.653470 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca" gracePeriod=30
Dec 10 14:45:47 crc kubenswrapper[4727]: I1210 14:45:47.653496 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="sbdb" containerID="cri-o://351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da" gracePeriod=30
Dec 10 14:45:47 crc kubenswrapper[4727]: I1210 14:45:47.653510 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovn-acl-logging" containerID="cri-o://d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8" gracePeriod=30
Dec 10 14:45:47 crc kubenswrapper[4727]: I1210 14:45:47.653535 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="nbdb" containerID="cri-o://4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360" gracePeriod=30
Dec 10 14:45:47 crc kubenswrapper[4727]: I1210 14:45:47.653486 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="kube-rbac-proxy-node" containerID="cri-o://d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331" gracePeriod=30
Dec 10 14:45:47 crc kubenswrapper[4727]: I1210 14:45:47.688569 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller" containerID="cri-o://3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43" gracePeriod=30
Dec 10 14:45:47 crc kubenswrapper[4727]: E1210 14:45:47.841959 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Dec 10 14:45:47 crc kubenswrapper[4727]: E1210 14:45:47.842132 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Dec 10 14:45:47 crc kubenswrapper[4727]: E1210 14:45:47.847547 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Dec 10 14:45:47 crc kubenswrapper[4727]: E1210 14:45:47.848174 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Dec 10 14:45:47 crc kubenswrapper[4727]: E1210 14:45:47.851793 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Dec 10 14:45:47 crc kubenswrapper[4727]: E1210 14:45:47.851834 4727 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="sbdb"
Dec 10 14:45:47 crc kubenswrapper[4727]: E1210 14:45:47.851923 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Dec 10 14:45:47 crc kubenswrapper[4727]: E1210 14:45:47.851939 4727 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="nbdb"
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.050602 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/3.log"
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.052884 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovn-acl-logging/0.log"
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.053332 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovn-controller/0.log"
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.053741 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p"
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.118839 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wwnhv"]
Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119085 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8f90ea-a6d0-4ea4-8573-2ea50493e86e" containerName="util"
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119097 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8f90ea-a6d0-4ea4-8573-2ea50493e86e" containerName="util"
Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119105 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller"
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119111 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller"
Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119118 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="kube-rbac-proxy-ovn-metrics"
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119125 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="kube-rbac-proxy-ovn-metrics"
Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119133 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="nbdb"
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119139 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="nbdb"
Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119148 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="kubecfg-setup"
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119153 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="kubecfg-setup"
Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119163 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller"
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119169 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller"
Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119175 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="sbdb"
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119181 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="sbdb"
Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119190 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovn-acl-logging"
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119195 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovn-acl-logging"
Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119206 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="northd"
podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="northd" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119212 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="northd" Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119223 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="kube-rbac-proxy-node" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119228 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="kube-rbac-proxy-node" Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119236 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovn-controller" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119242 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovn-controller" Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119249 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8f90ea-a6d0-4ea4-8573-2ea50493e86e" containerName="pull" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119254 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8f90ea-a6d0-4ea4-8573-2ea50493e86e" containerName="pull" Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119259 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8f90ea-a6d0-4ea4-8573-2ea50493e86e" containerName="extract" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119264 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8f90ea-a6d0-4ea4-8573-2ea50493e86e" containerName="extract" Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119274 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119279 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller" Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119286 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119292 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119380 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="nbdb" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119391 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119399 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="kube-rbac-proxy-node" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119407 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovn-acl-logging" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119413 4727 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119419 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119425 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="northd" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119435 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovn-controller" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119443 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119451 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="kube-rbac-proxy-ovn-metrics" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119458 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="sbdb" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119465 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8f90ea-a6d0-4ea4-8573-2ea50493e86e" containerName="extract" Dec 10 14:45:49 crc kubenswrapper[4727]: E1210 14:45:49.119554 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119560 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.119642 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerName="ovnkube-controller" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.121194 4727 util.go:30] "No sandbox for pod can be found. 
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249246 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-cni-netd\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") "
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249297 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovnkube-config\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") "
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249320 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovn-node-metrics-cert\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") "
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249339 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-var-lib-openvswitch\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") "
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249360 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-kubelet\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") "
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249370 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249387 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") "
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249405 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-cni-bin\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") "
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249420 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249427 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-ovn\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249450 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-slash\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249446 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249472 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-run-netns\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249493 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-systemd-units\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249476 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249513 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-log-socket\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249492 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249519 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-slash" (OuterVolumeSpecName: "host-slash") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). 
InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249550 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-log-socket" (OuterVolumeSpecName: "log-socket") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249540 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovnkube-script-lib\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249517 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249511 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249528 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249639 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-openvswitch\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249673 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-env-overrides\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249702 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-run-ovn-kubernetes\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249745 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-systemd\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249742 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249774 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-node-log\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249805 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-node-log" (OuterVolumeSpecName: "node-log") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249804 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249847 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-etc-openvswitch\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249878 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249896 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249882 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsrrl\" (UniqueName: \"kubernetes.io/projected/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-kube-api-access-lsrrl\") pod \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\" (UID: \"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba\") " Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249925 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250014 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-var-lib-openvswitch\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.249982 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250050 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f3300f7-c251-44e2-ab2d-84ee643c3c41-ovnkube-config\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250072 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0f3300f7-c251-44e2-ab2d-84ee643c3c41-ovnkube-script-lib\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250093 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-slash\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250107 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-run-ovn\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250158 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-run-netns\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250213 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-kubelet\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250290 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-run-systemd\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250368 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-cni-bin\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250394 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250427 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-node-log\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250482 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f3300f7-c251-44e2-ab2d-84ee643c3c41-ovn-node-metrics-cert\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250521 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-etc-openvswitch\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250551 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-systemd-units\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250582 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-cni-netd\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250603 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f3300f7-c251-44e2-ab2d-84ee643c3c41-env-overrides\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250651 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-run-ovn-kubernetes\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250702 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-log-socket\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250731 4727 
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250731 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89gtn\" (UniqueName: \"kubernetes.io/projected/0f3300f7-c251-44e2-ab2d-84ee643c3c41-kube-api-access-89gtn\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv"
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250767 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-run-openvswitch\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv"
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250834 4727 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250847 4727 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250859 4727 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250868 4727 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-node-log\") on node \"crc\" DevicePath \"\""
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250876 4727 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250887 4727 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-cni-netd\") on node \"crc\" DevicePath \"\""
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250895 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250931 4727 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250941 4727 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-kubelet\") on node \"crc\" DevicePath \"\""
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250950 4727 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250958 4727 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-cni-bin\") on node \"crc\" DevicePath \"\""
14:45:49.250958 4727 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250967 4727 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250976 4727 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-slash\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250984 4727 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250991 4727 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.250999 4727 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-log-socket\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.251007 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.255758 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.265184 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-kube-api-access-lsrrl" (OuterVolumeSpecName: "kube-api-access-lsrrl") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "kube-api-access-lsrrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.283805 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" (UID: "5b9f88bc-1b6e-4dd4-9d6e-febdde2facba"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351600 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0f3300f7-c251-44e2-ab2d-84ee643c3c41-ovnkube-script-lib\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351658 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-slash\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351680 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-run-ovn\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351702 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-kubelet\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351720 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-run-netns\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351751 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-run-systemd\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351777 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-cni-bin\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351804 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-run-systemd\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351819 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-kubelet\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351838 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-cni-bin\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351799 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-run-ovn\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-slash\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351844 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351809 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-run-netns\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351811 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.351957 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-node-log\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352013 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f3300f7-c251-44e2-ab2d-84ee643c3c41-ovn-node-metrics-cert\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352056 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-etc-openvswitch\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352073 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-node-log\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352102 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-systemd-units\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352081 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-systemd-units\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352156 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f3300f7-c251-44e2-ab2d-84ee643c3c41-env-overrides\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352142 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-etc-openvswitch\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352177 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-cni-netd\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352215 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-run-ovn-kubernetes\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352243 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-log-socket\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89gtn\" (UniqueName: \"kubernetes.io/projected/0f3300f7-c251-44e2-ab2d-84ee643c3c41-kube-api-access-89gtn\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352264 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-cni-netd\") pod 
\"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352295 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-run-openvswitch\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352319 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-log-socket\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352324 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-var-lib-openvswitch\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352350 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-host-run-ovn-kubernetes\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352371 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f3300f7-c251-44e2-ab2d-84ee643c3c41-ovnkube-config\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352389 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-run-openvswitch\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352567 4727 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352594 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f3300f7-c251-44e2-ab2d-84ee643c3c41-var-lib-openvswitch\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352610 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsrrl\" (UniqueName: \"kubernetes.io/projected/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-kube-api-access-lsrrl\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352620 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352625 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0f3300f7-c251-44e2-ab2d-84ee643c3c41-ovnkube-script-lib\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.352682 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f3300f7-c251-44e2-ab2d-84ee643c3c41-env-overrides\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.353310 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f3300f7-c251-44e2-ab2d-84ee643c3c41-ovnkube-config\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.355211 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f3300f7-c251-44e2-ab2d-84ee643c3c41-ovn-node-metrics-cert\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.372277 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89gtn\" (UniqueName: \"kubernetes.io/projected/0f3300f7-c251-44e2-ab2d-84ee643c3c41-kube-api-access-89gtn\") pod \"ovnkube-node-wwnhv\" (UID: \"0f3300f7-c251-44e2-ab2d-84ee643c3c41\") " pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.436295 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:45:49 crc kubenswrapper[4727]: W1210 14:45:49.454929 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f3300f7_c251_44e2_ab2d_84ee643c3c41.slice/crio-4dea218836efc8fa324e62ae68b24de0d6f8d4456e500550952cd9554695a133 WatchSource:0}: Error finding container 4dea218836efc8fa324e62ae68b24de0d6f8d4456e500550952cd9554695a133: Status 404 returned error can't find the container with id 4dea218836efc8fa324e62ae68b24de0d6f8d4456e500550952cd9554695a133 Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.780293 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6ph7v_c724a700-1960-4452-9106-d71685d1b38c/kube-multus/2.log" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.781226 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6ph7v_c724a700-1960-4452-9106-d71685d1b38c/kube-multus/1.log" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.781278 4727 generic.go:334] "Generic (PLEG): container finished" podID="c724a700-1960-4452-9106-d71685d1b38c" containerID="9852f93c90775d9bfb9a2a4dbd2105b8926e215a6ffbd85de1dd7b2ba100b90e" exitCode=2 Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.781326 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6ph7v" event={"ID":"c724a700-1960-4452-9106-d71685d1b38c","Type":"ContainerDied","Data":"9852f93c90775d9bfb9a2a4dbd2105b8926e215a6ffbd85de1dd7b2ba100b90e"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.781435 4727 scope.go:117] "RemoveContainer" containerID="029e1a2087c1fc515492e739da376e0970f5738dadd2a6842d8dfea64c28fe2f" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.781973 4727 scope.go:117] "RemoveContainer" containerID="9852f93c90775d9bfb9a2a4dbd2105b8926e215a6ffbd85de1dd7b2ba100b90e" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.783318 4727 generic.go:334] "Generic (PLEG): container finished" podID="0f3300f7-c251-44e2-ab2d-84ee643c3c41" containerID="c15be5a3d68f3907af58987fc81dd009e1e80c665e8d8ba533cfb4bab51918b1" exitCode=0 Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.783366 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" event={"ID":"0f3300f7-c251-44e2-ab2d-84ee643c3c41","Type":"ContainerDied","Data":"c15be5a3d68f3907af58987fc81dd009e1e80c665e8d8ba533cfb4bab51918b1"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.783394 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" event={"ID":"0f3300f7-c251-44e2-ab2d-84ee643c3c41","Type":"ContainerStarted","Data":"4dea218836efc8fa324e62ae68b24de0d6f8d4456e500550952cd9554695a133"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.785562 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovnkube-controller/3.log" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.788169 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovn-acl-logging/0.log" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.788670 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k8b7p_5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/ovn-controller/0.log" Dec 10 
14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789459 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerID="3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43" exitCode=0 Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789479 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerID="351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da" exitCode=0 Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789487 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerID="4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360" exitCode=0 Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789494 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerID="a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f" exitCode=0 Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789501 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerID="6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca" exitCode=0 Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789507 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerID="d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331" exitCode=0 Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789513 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerID="d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8" exitCode=143 Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789519 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" containerID="cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc" exitCode=143 Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789535 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerDied","Data":"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789554 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerDied","Data":"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789564 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerDied","Data":"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789577 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerDied","Data":"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789586 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" 
event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerDied","Data":"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789594 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerDied","Data":"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789604 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789649 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789655 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789661 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789667 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789672 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789678 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789683 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789688 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789693 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789701 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerDied","Data":"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789709 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43"} Dec 10 14:45:49 crc 
kubenswrapper[4727]: I1210 14:45:49.789715 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789720 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789726 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789731 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789736 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789740 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789746 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789751 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789755 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789763 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerDied","Data":"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789770 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789776 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789781 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789787 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360"} Dec 10 14:45:49 crc 
kubenswrapper[4727]: I1210 14:45:49.789792 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789797 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789803 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789810 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789815 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789829 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789836 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" event={"ID":"5b9f88bc-1b6e-4dd4-9d6e-febdde2facba","Type":"ContainerDied","Data":"06e2c5a0c013037c04a2af7b1fedb0309e98cc33b0a01350ef0b081de19dfba7"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789843 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789849 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789860 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789866 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789877 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789887 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789893 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331"} Dec 10 14:45:49 crc 
kubenswrapper[4727]: I1210 14:45:49.789960 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.789970 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.790151 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706"} Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.790266 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k8b7p" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.825111 4727 scope.go:117] "RemoveContainer" containerID="3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.851712 4727 scope.go:117] "RemoveContainer" containerID="b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.895564 4727 scope.go:117] "RemoveContainer" containerID="351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da" Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.916310 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k8b7p"] Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.941259 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k8b7p"] Dec 10 14:45:49 crc kubenswrapper[4727]: I1210 14:45:49.951104 4727 scope.go:117] "RemoveContainer" containerID="4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.042384 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z8mtg" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.042463 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z8mtg" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.170671 4727 scope.go:117] "RemoveContainer" containerID="a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.191628 4727 scope.go:117] "RemoveContainer" containerID="6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.212235 4727 scope.go:117] "RemoveContainer" containerID="d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.244774 4727 scope.go:117] "RemoveContainer" containerID="d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.455505 4727 scope.go:117] "RemoveContainer" containerID="cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.487383 4727 scope.go:117] "RemoveContainer" containerID="43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.559440 4727 scope.go:117] "RemoveContainer" 
containerID="3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43" Dec 10 14:45:50 crc kubenswrapper[4727]: E1210 14:45:50.559944 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43\": container with ID starting with 3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43 not found: ID does not exist" containerID="3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.559979 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43"} err="failed to get container status \"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43\": rpc error: code = NotFound desc = could not find container \"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43\": container with ID starting with 3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.560029 4727 scope.go:117] "RemoveContainer" containerID="b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12" Dec 10 14:45:50 crc kubenswrapper[4727]: E1210 14:45:50.560349 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12\": container with ID starting with b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12 not found: ID does not exist" containerID="b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.560396 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12"} err="failed to get container status \"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12\": rpc error: code = NotFound desc = could not find container \"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12\": container with ID starting with b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.560428 4727 scope.go:117] "RemoveContainer" containerID="351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da" Dec 10 14:45:50 crc kubenswrapper[4727]: E1210 14:45:50.560843 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\": container with ID starting with 351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da not found: ID does not exist" containerID="351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.560898 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da"} err="failed to get container status \"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\": rpc error: code = NotFound desc = could not find container \"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\": container with ID starting with 
351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.560951 4727 scope.go:117] "RemoveContainer" containerID="4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360" Dec 10 14:45:50 crc kubenswrapper[4727]: E1210 14:45:50.561314 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\": container with ID starting with 4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360 not found: ID does not exist" containerID="4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.561350 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360"} err="failed to get container status \"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\": rpc error: code = NotFound desc = could not find container \"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\": container with ID starting with 4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.561371 4727 scope.go:117] "RemoveContainer" containerID="a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f" Dec 10 14:45:50 crc kubenswrapper[4727]: E1210 14:45:50.561731 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\": container with ID starting with a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f not found: ID does not exist" containerID="a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.561768 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f"} err="failed to get container status \"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\": rpc error: code = NotFound desc = could not find container \"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\": container with ID starting with a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.561789 4727 scope.go:117] "RemoveContainer" containerID="6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca" Dec 10 14:45:50 crc kubenswrapper[4727]: E1210 14:45:50.562159 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\": container with ID starting with 6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca not found: ID does not exist" containerID="6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.562186 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca"} err="failed to get container status \"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\": rpc 
error: code = NotFound desc = could not find container \"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\": container with ID starting with 6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.562244 4727 scope.go:117] "RemoveContainer" containerID="d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331" Dec 10 14:45:50 crc kubenswrapper[4727]: E1210 14:45:50.562564 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\": container with ID starting with d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331 not found: ID does not exist" containerID="d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.562588 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331"} err="failed to get container status \"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\": rpc error: code = NotFound desc = could not find container \"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\": container with ID starting with d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.562609 4727 scope.go:117] "RemoveContainer" containerID="d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8" Dec 10 14:45:50 crc kubenswrapper[4727]: E1210 14:45:50.562816 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\": container with ID starting with d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8 not found: ID does not exist" containerID="d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.562844 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8"} err="failed to get container status \"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\": rpc error: code = NotFound desc = could not find container \"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\": container with ID starting with d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.562864 4727 scope.go:117] "RemoveContainer" containerID="cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc" Dec 10 14:45:50 crc kubenswrapper[4727]: E1210 14:45:50.563057 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\": container with ID starting with cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc not found: ID does not exist" containerID="cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.563078 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc"} err="failed to get container status \"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\": rpc error: code = NotFound desc = could not find container \"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\": container with ID starting with cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.563091 4727 scope.go:117] "RemoveContainer" containerID="43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706" Dec 10 14:45:50 crc kubenswrapper[4727]: E1210 14:45:50.563280 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\": container with ID starting with 43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706 not found: ID does not exist" containerID="43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.563307 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706"} err="failed to get container status \"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\": rpc error: code = NotFound desc = could not find container \"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\": container with ID starting with 43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.563323 4727 scope.go:117] "RemoveContainer" containerID="3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.563550 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43"} err="failed to get container status \"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43\": rpc error: code = NotFound desc = could not find container \"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43\": container with ID starting with 3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.563573 4727 scope.go:117] "RemoveContainer" containerID="b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.563775 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12"} err="failed to get container status \"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12\": rpc error: code = NotFound desc = could not find container \"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12\": container with ID starting with b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.563794 4727 scope.go:117] "RemoveContainer" containerID="351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.563995 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da"} err="failed to get container status \"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\": rpc error: code = NotFound desc = could not find container \"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\": container with ID starting with 351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.564015 4727 scope.go:117] "RemoveContainer" containerID="4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.564207 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360"} err="failed to get container status \"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\": rpc error: code = NotFound desc = could not find container \"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\": container with ID starting with 4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.564233 4727 scope.go:117] "RemoveContainer" containerID="a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.564425 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f"} err="failed to get container status \"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\": rpc error: code = NotFound desc = could not find container \"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\": container with ID starting with a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.564442 4727 scope.go:117] "RemoveContainer" containerID="6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.564645 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca"} err="failed to get container status \"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\": rpc error: code = NotFound desc = could not find container \"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\": container with ID starting with 6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.564664 4727 scope.go:117] "RemoveContainer" containerID="d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.564859 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331"} err="failed to get container status \"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\": rpc error: code = NotFound desc = could not find container \"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\": container with ID starting with d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331 not found: ID does not exist" Dec 
10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.564877 4727 scope.go:117] "RemoveContainer" containerID="d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.565109 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8"} err="failed to get container status \"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\": rpc error: code = NotFound desc = could not find container \"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\": container with ID starting with d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.565151 4727 scope.go:117] "RemoveContainer" containerID="cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.565983 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc"} err="failed to get container status \"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\": rpc error: code = NotFound desc = could not find container \"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\": container with ID starting with cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.566004 4727 scope.go:117] "RemoveContainer" containerID="43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.566220 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706"} err="failed to get container status \"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\": rpc error: code = NotFound desc = could not find container \"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\": container with ID starting with 43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.566242 4727 scope.go:117] "RemoveContainer" containerID="3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.570147 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43"} err="failed to get container status \"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43\": rpc error: code = NotFound desc = could not find container \"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43\": container with ID starting with 3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.570179 4727 scope.go:117] "RemoveContainer" containerID="b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.570986 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12"} err="failed to get container status 
\"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12\": rpc error: code = NotFound desc = could not find container \"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12\": container with ID starting with b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.571009 4727 scope.go:117] "RemoveContainer" containerID="351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.571310 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b9f88bc-1b6e-4dd4-9d6e-febdde2facba" path="/var/lib/kubelet/pods/5b9f88bc-1b6e-4dd4-9d6e-febdde2facba/volumes" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.571354 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da"} err="failed to get container status \"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\": rpc error: code = NotFound desc = could not find container \"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\": container with ID starting with 351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.571370 4727 scope.go:117] "RemoveContainer" containerID="4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.571635 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360"} err="failed to get container status \"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\": rpc error: code = NotFound desc = could not find container \"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\": container with ID starting with 4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.571654 4727 scope.go:117] "RemoveContainer" containerID="a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.571849 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f"} err="failed to get container status \"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\": rpc error: code = NotFound desc = could not find container \"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\": container with ID starting with a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.571874 4727 scope.go:117] "RemoveContainer" containerID="6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.572437 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca"} err="failed to get container status \"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\": rpc error: code = NotFound desc = could not find container \"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\": container with ID 
starting with 6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.572464 4727 scope.go:117] "RemoveContainer" containerID="d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.572788 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331"} err="failed to get container status \"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\": rpc error: code = NotFound desc = could not find container \"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\": container with ID starting with d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.572820 4727 scope.go:117] "RemoveContainer" containerID="d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.573105 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8"} err="failed to get container status \"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\": rpc error: code = NotFound desc = could not find container \"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\": container with ID starting with d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.573127 4727 scope.go:117] "RemoveContainer" containerID="cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.573361 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc"} err="failed to get container status \"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\": rpc error: code = NotFound desc = could not find container \"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\": container with ID starting with cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.573380 4727 scope.go:117] "RemoveContainer" containerID="43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.573700 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706"} err="failed to get container status \"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\": rpc error: code = NotFound desc = could not find container \"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\": container with ID starting with 43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.573751 4727 scope.go:117] "RemoveContainer" containerID="3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.574240 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43"} err="failed to get container status \"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43\": rpc error: code = NotFound desc = could not find container \"3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43\": container with ID starting with 3ce48d64e1e2a68a29178bfb4544634dd3918d31d63624dc0e27ba2088e28a43 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.574262 4727 scope.go:117] "RemoveContainer" containerID="b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.574616 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12"} err="failed to get container status \"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12\": rpc error: code = NotFound desc = could not find container \"b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12\": container with ID starting with b796524762da198d126c373e9c85bc1742dc2e8a945f7ff39cc05d01827bcc12 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.574637 4727 scope.go:117] "RemoveContainer" containerID="351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.574895 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da"} err="failed to get container status \"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\": rpc error: code = NotFound desc = could not find container \"351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da\": container with ID starting with 351a209d147314e0cadf34b3a3a9223ab6179845a6780db1e1bdaee8d8fad7da not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.574926 4727 scope.go:117] "RemoveContainer" containerID="4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.575198 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360"} err="failed to get container status \"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\": rpc error: code = NotFound desc = could not find container \"4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360\": container with ID starting with 4df7bfc4f102023f190026dca17eb56214a82c93adad0c9ca002e3c913d20360 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.575218 4727 scope.go:117] "RemoveContainer" containerID="a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.575446 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f"} err="failed to get container status \"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\": rpc error: code = NotFound desc = could not find container \"a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f\": container with ID starting with a16c3e659c3e8fa60f69cb32cdce5e5f6a04fe3683f15539f83f9dadd595747f not found: ID does not exist" Dec 
10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.575467 4727 scope.go:117] "RemoveContainer" containerID="6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.575759 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca"} err="failed to get container status \"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\": rpc error: code = NotFound desc = could not find container \"6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca\": container with ID starting with 6c8930507f2a73de22c48b69a387a565e0454b6007d885b1a306b33c0878b7ca not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.575779 4727 scope.go:117] "RemoveContainer" containerID="d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.575987 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331"} err="failed to get container status \"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\": rpc error: code = NotFound desc = could not find container \"d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331\": container with ID starting with d7846d84025f8674605a0bc30ad661b706afd153502e17ffbe7a950821423331 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.576005 4727 scope.go:117] "RemoveContainer" containerID="d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.576341 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8"} err="failed to get container status \"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\": rpc error: code = NotFound desc = could not find container \"d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8\": container with ID starting with d86081f064a8fc0d0cff84daa3582eb172f5271b78db8734bef99dcc7f17d5d8 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.576367 4727 scope.go:117] "RemoveContainer" containerID="cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.576626 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc"} err="failed to get container status \"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\": rpc error: code = NotFound desc = could not find container \"cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc\": container with ID starting with cb2418c7de951dfa4c6ffa4a450ba1cf9903075139d3f6651778f44f797927bc not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.576645 4727 scope.go:117] "RemoveContainer" containerID="43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.577045 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706"} err="failed to get container status 
\"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\": rpc error: code = NotFound desc = could not find container \"43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706\": container with ID starting with 43d6a3240e76e1d2d26107ccc6301fad4f46440a1e2a86fcab5cc1fb534ef706 not found: ID does not exist" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.796465 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6ph7v_c724a700-1960-4452-9106-d71685d1b38c/kube-multus/2.log" Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.796549 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6ph7v" event={"ID":"c724a700-1960-4452-9106-d71685d1b38c","Type":"ContainerStarted","Data":"5dd2bab62ba5a3bf43b51eebfc69f4226ae615ec8511df209d8ec23e90b1c2a7"} Dec 10 14:45:50 crc kubenswrapper[4727]: I1210 14:45:50.799432 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" event={"ID":"0f3300f7-c251-44e2-ab2d-84ee643c3c41","Type":"ContainerStarted","Data":"cda3a0c0d62a802f51232d96deb3aabd49a02ef311563cfeee28916ab2f52a95"} Dec 10 14:45:51 crc kubenswrapper[4727]: I1210 14:45:51.250184 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z8mtg" podUID="afb59750-e50a-4669-bcdd-8d0584d78f06" containerName="registry-server" probeResult="failure" output=< Dec 10 14:45:51 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Dec 10 14:45:51 crc kubenswrapper[4727]: > Dec 10 14:45:51 crc kubenswrapper[4727]: I1210 14:45:51.900989 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" event={"ID":"0f3300f7-c251-44e2-ab2d-84ee643c3c41","Type":"ContainerStarted","Data":"2abad360e18dbc99c2509fbf25b12f4727971f35d99b8b77d50e0e3f81c02658"} Dec 10 14:45:51 crc kubenswrapper[4727]: I1210 14:45:51.901035 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" event={"ID":"0f3300f7-c251-44e2-ab2d-84ee643c3c41","Type":"ContainerStarted","Data":"f15afca74e4512ba47a4a2b2b202c30c99ca123c6e0b5db5d68c3ad56e3d7b82"} Dec 10 14:45:53 crc kubenswrapper[4727]: I1210 14:45:53.933356 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" event={"ID":"0f3300f7-c251-44e2-ab2d-84ee643c3c41","Type":"ContainerStarted","Data":"053e6009692b3a6ca8cefc4bad960820cc78a02c67fa34a9019bc771abe5081b"} Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.409206 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6"] Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.410734 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.414706 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.420105 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-wtvvp" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.420323 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.449617 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5xhn\" (UniqueName: \"kubernetes.io/projected/81baa0e2-5696-4291-9455-024cb6f22dd5-kube-api-access-k5xhn\") pod \"obo-prometheus-operator-668cf9dfbb-hzsr6\" (UID: \"81baa0e2-5696-4291-9455-024cb6f22dd5\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.534629 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw"] Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.535931 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.538707 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.543085 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-thvng" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.544257 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd"] Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.544939 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.550565 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5xhn\" (UniqueName: \"kubernetes.io/projected/81baa0e2-5696-4291-9455-024cb6f22dd5-kube-api-access-k5xhn\") pod \"obo-prometheus-operator-668cf9dfbb-hzsr6\" (UID: \"81baa0e2-5696-4291-9455-024cb6f22dd5\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.550774 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6893fbb-598f-492a-83cc-ad8e77e058e8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw\" (UID: \"e6893fbb-598f-492a-83cc-ad8e77e058e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.550889 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6893fbb-598f-492a-83cc-ad8e77e058e8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw\" (UID: \"e6893fbb-598f-492a-83cc-ad8e77e058e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.580262 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5xhn\" (UniqueName: \"kubernetes.io/projected/81baa0e2-5696-4291-9455-024cb6f22dd5-kube-api-access-k5xhn\") pod \"obo-prometheus-operator-668cf9dfbb-hzsr6\" (UID: \"81baa0e2-5696-4291-9455-024cb6f22dd5\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.652052 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6893fbb-598f-492a-83cc-ad8e77e058e8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw\" (UID: \"e6893fbb-598f-492a-83cc-ad8e77e058e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.652118 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6893fbb-598f-492a-83cc-ad8e77e058e8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw\" (UID: \"e6893fbb-598f-492a-83cc-ad8e77e058e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.652524 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/173dcd53-e8a8-4f7c-a9f0-e923495d8068-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd\" (UID: \"173dcd53-e8a8-4f7c-a9f0-e923495d8068\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.652730 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/173dcd53-e8a8-4f7c-a9f0-e923495d8068-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd\" (UID: \"173dcd53-e8a8-4f7c-a9f0-e923495d8068\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.656044 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6893fbb-598f-492a-83cc-ad8e77e058e8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw\" (UID: \"e6893fbb-598f-492a-83cc-ad8e77e058e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.656051 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6893fbb-598f-492a-83cc-ad8e77e058e8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw\" (UID: \"e6893fbb-598f-492a-83cc-ad8e77e058e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.728029 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-tbccx"] Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.729271 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.730673 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.742117 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-dx2lm" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.742529 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.754818 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b27qr\" (UniqueName: \"kubernetes.io/projected/10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c-kube-api-access-b27qr\") pod \"observability-operator-d8bb48f5d-tbccx\" (UID: \"10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c\") " pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.754894 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-tbccx\" (UID: \"10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c\") " pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.755063 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/173dcd53-e8a8-4f7c-a9f0-e923495d8068-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd\" (UID: \"173dcd53-e8a8-4f7c-a9f0-e923495d8068\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:45:55 crc kubenswrapper[4727]: 
I1210 14:45:55.755124 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/173dcd53-e8a8-4f7c-a9f0-e923495d8068-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd\" (UID: \"173dcd53-e8a8-4f7c-a9f0-e923495d8068\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.759438 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/173dcd53-e8a8-4f7c-a9f0-e923495d8068-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd\" (UID: \"173dcd53-e8a8-4f7c-a9f0-e923495d8068\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.759675 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/173dcd53-e8a8-4f7c-a9f0-e923495d8068-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd\" (UID: \"173dcd53-e8a8-4f7c-a9f0-e923495d8068\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:45:55 crc kubenswrapper[4727]: E1210 14:45:55.771858 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-hzsr6_openshift-operators_81baa0e2-5696-4291-9455-024cb6f22dd5_0(89904f6e58b67cf6c00993c83a07f65a9ad59420ae083938020e81b26d6c36cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:45:55 crc kubenswrapper[4727]: E1210 14:45:55.772488 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-hzsr6_openshift-operators_81baa0e2-5696-4291-9455-024cb6f22dd5_0(89904f6e58b67cf6c00993c83a07f65a9ad59420ae083938020e81b26d6c36cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" Dec 10 14:45:55 crc kubenswrapper[4727]: E1210 14:45:55.772545 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-hzsr6_openshift-operators_81baa0e2-5696-4291-9455-024cb6f22dd5_0(89904f6e58b67cf6c00993c83a07f65a9ad59420ae083938020e81b26d6c36cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" Dec 10 14:45:55 crc kubenswrapper[4727]: E1210 14:45:55.772589 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-hzsr6_openshift-operators(81baa0e2-5696-4291-9455-024cb6f22dd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-hzsr6_openshift-operators(81baa0e2-5696-4291-9455-024cb6f22dd5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-hzsr6_openshift-operators_81baa0e2-5696-4291-9455-024cb6f22dd5_0(89904f6e58b67cf6c00993c83a07f65a9ad59420ae083938020e81b26d6c36cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" podUID="81baa0e2-5696-4291-9455-024cb6f22dd5" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.854643 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.856496 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-tbccx\" (UID: \"10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c\") " pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.856664 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b27qr\" (UniqueName: \"kubernetes.io/projected/10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c-kube-api-access-b27qr\") pod \"observability-operator-d8bb48f5d-tbccx\" (UID: \"10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c\") " pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.862140 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-tbccx\" (UID: \"10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c\") " pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.863561 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.897585 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-d4j2k"] Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.898096 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b27qr\" (UniqueName: \"kubernetes.io/projected/10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c-kube-api-access-b27qr\") pod \"observability-operator-d8bb48f5d-tbccx\" (UID: \"10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c\") " pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.898467 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.902782 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-t8r6p" Dec 10 14:45:55 crc kubenswrapper[4727]: E1210 14:45:55.937123 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw_openshift-operators_e6893fbb-598f-492a-83cc-ad8e77e058e8_0(0d440c621f12ed1af8e7703bdcd4a5ba26d0c636d2f499eea74df9e77479e3ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:45:55 crc kubenswrapper[4727]: E1210 14:45:55.937199 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw_openshift-operators_e6893fbb-598f-492a-83cc-ad8e77e058e8_0(0d440c621f12ed1af8e7703bdcd4a5ba26d0c636d2f499eea74df9e77479e3ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:45:55 crc kubenswrapper[4727]: E1210 14:45:55.937224 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw_openshift-operators_e6893fbb-598f-492a-83cc-ad8e77e058e8_0(0d440c621f12ed1af8e7703bdcd4a5ba26d0c636d2f499eea74df9e77479e3ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:45:55 crc kubenswrapper[4727]: E1210 14:45:55.937282 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw_openshift-operators(e6893fbb-598f-492a-83cc-ad8e77e058e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw_openshift-operators(e6893fbb-598f-492a-83cc-ad8e77e058e8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw_openshift-operators_e6893fbb-598f-492a-83cc-ad8e77e058e8_0(0d440c621f12ed1af8e7703bdcd4a5ba26d0c636d2f499eea74df9e77479e3ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" podUID="e6893fbb-598f-492a-83cc-ad8e77e058e8" Dec 10 14:45:55 crc kubenswrapper[4727]: E1210 14:45:55.960122 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd_openshift-operators_173dcd53-e8a8-4f7c-a9f0-e923495d8068_0(0cff3868118c64700b4ff6af3d6003114c762232d2e217219f9e36a0c04bfbc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
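
The long run of "RemoveContainer" / "DeleteContainer returned error ... NotFound" pairs earlier in this window (repeated sweeps over the same ten container IDs at 14:45:50, apparently left over from pod 5b9f88bc-1b6e-4dd4-9d6e-febdde2facba, whose orphaned volumes dir is cleaned up in the same second) is benign: the containers are already gone from the runtime, so every status lookup before removal comes back NotFound and the kubelet moves on. A minimal Go sketch of that idempotent-delete pattern, assuming a hypothetical Runtime stand-in (this is not the real CRI client, and the ID is truncated from the log):

    // Sketch only: Runtime is an assumed in-memory stand-in for a container
    // runtime client. It mirrors the pattern in the log above: a status
    // lookup precedes removal, and NotFound means the container is already
    // gone, so the "error" is reported and then ignored.
    package main

    import (
    	"errors"
    	"fmt"
    )

    var errNotFound = errors.New("NotFound: ID does not exist")

    // Runtime tracks which container IDs still exist (assumption).
    type Runtime struct{ containers map[string]bool }

    func (r *Runtime) ContainerStatus(id string) error {
    	if !r.containers[id] {
    		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
    	}
    	return nil
    }

    func (r *Runtime) RemoveContainer(id string) { delete(r.containers, id) }

    func removeIfPresent(r *Runtime, id string) {
    	err := r.ContainerStatus(id)
    	switch {
    	case err == nil:
    		r.RemoveContainer(id)
    		fmt.Printf("removed %s\n", id)
    	case errors.Is(err, errNotFound):
    		// Corresponds to the "DeleteContainer returned error" lines:
    		// reported, but there is nothing left to delete.
    		fmt.Printf("DeleteContainer returned error: %v\n", err)
    	default:
    		fmt.Printf("unexpected runtime error: %v\n", err)
    	}
    }

    func main() {
    	rt := &Runtime{containers: map[string]bool{"351a209d1473": true}}
    	removeIfPresent(rt, "351a209d1473") // first sweep removes it
    	removeIfPresent(rt, "351a209d1473") // later sweeps: NotFound, harmless
    }

Treating NotFound as "already removed" is what keeps these repeated sweeps harmless even though each one emits an error line.
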
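
Every RunPodSandbox attempt in this window (obo-prometheus-operator, its two admission-webhook replicas, observability-operator, and perses-operator below) fails with the same root cause: /etc/kubernetes/cni/net.d/ contains no CNI configuration yet, because ovn-kubernetes (ovnkube-node-wwnhv, whose containers are still starting in the surrounding entries) has not written one. A minimal sketch of the presence check, assuming stdlib Go and the extensions libcni conventionally loads; the real gate lives inside CRI-O's libcni integration, not in code like this:

    // Minimal sketch, assuming stdlib Go: reports whether any CNI config
    // file exists where the log says CRI-O looks. The extension list
    // matches what libcni conventionally loads.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func hasCNIConfig(dir string) (bool, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return false, err
    	}
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
    	if err != nil || !ok {
    		// Matches the recurring error: "no CNI configuration file in
    		// /etc/kubernetes/cni/net.d/. Has your network provider started?"
    		fmt.Println("no CNI configuration file yet; RunPodSandbox will keep failing")
    		return
    	}
    	fmt.Println("CNI configured; sandbox creation can proceed")
    }

Once ovnkube-node becomes ready (its readiness probes flip to "ready" at 14:46:03 below), the network provider can drop a config into that directory and the retried sandboxes can go through.
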
Dec 10 14:45:55 crc kubenswrapper[4727]: E1210 14:45:55.960206 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd_openshift-operators_173dcd53-e8a8-4f7c-a9f0-e923495d8068_0(0cff3868118c64700b4ff6af3d6003114c762232d2e217219f9e36a0c04bfbc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:45:55 crc kubenswrapper[4727]: E1210 14:45:55.960231 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd_openshift-operators_173dcd53-e8a8-4f7c-a9f0-e923495d8068_0(0cff3868118c64700b4ff6af3d6003114c762232d2e217219f9e36a0c04bfbc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:45:55 crc kubenswrapper[4727]: E1210 14:45:55.960284 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd_openshift-operators(173dcd53-e8a8-4f7c-a9f0-e923495d8068)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd_openshift-operators(173dcd53-e8a8-4f7c-a9f0-e923495d8068)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd_openshift-operators_173dcd53-e8a8-4f7c-a9f0-e923495d8068_0(0cff3868118c64700b4ff6af3d6003114c762232d2e217219f9e36a0c04bfbc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" podUID="173dcd53-e8a8-4f7c-a9f0-e923495d8068" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.961007 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxl2w\" (UniqueName: \"kubernetes.io/projected/7b7103db-15ac-4e33-89e2-50288a5e12dd-kube-api-access-gxl2w\") pod \"perses-operator-5446b9c989-d4j2k\" (UID: \"7b7103db-15ac-4e33-89e2-50288a5e12dd\") " pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.961108 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b7103db-15ac-4e33-89e2-50288a5e12dd-openshift-service-ca\") pod \"perses-operator-5446b9c989-d4j2k\" (UID: \"7b7103db-15ac-4e33-89e2-50288a5e12dd\") " pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.971539 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" event={"ID":"0f3300f7-c251-44e2-ab2d-84ee643c3c41","Type":"ContainerStarted","Data":"10196205d639d1580fde1812cea39f91c3f76d68b6c979c6d829e74986292593"} Dec 10 14:45:55 crc kubenswrapper[4727]: I1210 14:45:55.971586 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" event={"ID":"0f3300f7-c251-44e2-ab2d-84ee643c3c41","Type":"ContainerStarted","Data":"e4f6f3cdd180d8149f5852748f4f48706ec004d42d8916bf47049690a13b2701"} Dec 10 14:45:56 crc kubenswrapper[4727]: I1210 14:45:56.061815 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxl2w\" (UniqueName: \"kubernetes.io/projected/7b7103db-15ac-4e33-89e2-50288a5e12dd-kube-api-access-gxl2w\") pod \"perses-operator-5446b9c989-d4j2k\" (UID: \"7b7103db-15ac-4e33-89e2-50288a5e12dd\") " pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:45:56 crc kubenswrapper[4727]: I1210 14:45:56.061884 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b7103db-15ac-4e33-89e2-50288a5e12dd-openshift-service-ca\") pod \"perses-operator-5446b9c989-d4j2k\" (UID: \"7b7103db-15ac-4e33-89e2-50288a5e12dd\") " pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:45:56 crc kubenswrapper[4727]: I1210 14:45:56.062671 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b7103db-15ac-4e33-89e2-50288a5e12dd-openshift-service-ca\") pod \"perses-operator-5446b9c989-d4j2k\" (UID: \"7b7103db-15ac-4e33-89e2-50288a5e12dd\") " pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:45:56 crc kubenswrapper[4727]: I1210 14:45:56.079185 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxl2w\" (UniqueName: \"kubernetes.io/projected/7b7103db-15ac-4e33-89e2-50288a5e12dd-kube-api-access-gxl2w\") pod \"perses-operator-5446b9c989-d4j2k\" (UID: \"7b7103db-15ac-4e33-89e2-50288a5e12dd\") " pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:45:56 crc kubenswrapper[4727]: I1210 14:45:56.079486 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:45:56 crc kubenswrapper[4727]: E1210 14:45:56.098214 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tbccx_openshift-operators_10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c_0(7468086f4ce93de2800b34bf72a55de41c2021cf0cb2444f232ae4e4d5358479): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:45:56 crc kubenswrapper[4727]: E1210 14:45:56.098297 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tbccx_openshift-operators_10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c_0(7468086f4ce93de2800b34bf72a55de41c2021cf0cb2444f232ae4e4d5358479): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:45:56 crc kubenswrapper[4727]: E1210 14:45:56.098323 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tbccx_openshift-operators_10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c_0(7468086f4ce93de2800b34bf72a55de41c2021cf0cb2444f232ae4e4d5358479): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:45:56 crc kubenswrapper[4727]: E1210 14:45:56.098377 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-tbccx_openshift-operators(10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-tbccx_openshift-operators(10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tbccx_openshift-operators_10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c_0(7468086f4ce93de2800b34bf72a55de41c2021cf0cb2444f232ae4e4d5358479): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" podUID="10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c" Dec 10 14:45:56 crc kubenswrapper[4727]: I1210 14:45:56.218933 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:45:56 crc kubenswrapper[4727]: E1210 14:45:56.245807 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-d4j2k_openshift-operators_7b7103db-15ac-4e33-89e2-50288a5e12dd_0(17fd7400878f278dc7dfb59898de83c2a1b4d63e0d3c749cc8d7d0e9df31e8b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
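
The "Error syncing pod, skipping" lines from pod_workers.go are per-attempt, not terminal: the failed sync is dropped and the pod is requeued, which is why the identical CreatePodSandbox error recurs for the same pods at 14:45:55-56 and again around 14:46:03. A toy retry loop with exponential backoff, where syncPod, the durations, and the attempt cutoff are all assumptions made for illustration (the kubelet's actual requeue and backoff machinery is more involved):

    // Toy resync loop, not kubelet code: it only shows the shape of the
    // behavior in the log above: fail, log "Error syncing pod, skipping",
    // back off, retry until the CNI config shows up.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    var errNoCNI = errors.New("no CNI configuration file in /etc/kubernetes/cni/net.d/")

    func syncPod(attempt int) error {
    	if attempt < 3 { // pretend CNI appears before the fourth attempt
    		return fmt.Errorf("failed to create pod network sandbox: %w", errNoCNI)
    	}
    	return nil
    }

    func main() {
    	backoff := 500 * time.Millisecond
    	for attempt := 0; ; attempt++ {
    		if err := syncPod(attempt); err != nil {
    			fmt.Printf("Error syncing pod, skipping: %v (retrying in %v)\n", err, backoff)
    			time.Sleep(backoff)
    			backoff *= 2 // grows per failure; the real kubelet caps this
    			continue
    		}
    		fmt.Println("sandbox created; pod sync succeeded")
    		return
    	}
    }
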
Dec 10 14:45:56 crc kubenswrapper[4727]: E1210 14:45:56.245984 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-d4j2k_openshift-operators_7b7103db-15ac-4e33-89e2-50288a5e12dd_0(17fd7400878f278dc7dfb59898de83c2a1b4d63e0d3c749cc8d7d0e9df31e8b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:45:56 crc kubenswrapper[4727]: E1210 14:45:56.246068 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-d4j2k_openshift-operators_7b7103db-15ac-4e33-89e2-50288a5e12dd_0(17fd7400878f278dc7dfb59898de83c2a1b4d63e0d3c749cc8d7d0e9df31e8b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:45:56 crc kubenswrapper[4727]: E1210 14:45:56.246177 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-d4j2k_openshift-operators(7b7103db-15ac-4e33-89e2-50288a5e12dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-d4j2k_openshift-operators(7b7103db-15ac-4e33-89e2-50288a5e12dd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-d4j2k_openshift-operators_7b7103db-15ac-4e33-89e2-50288a5e12dd_0(17fd7400878f278dc7dfb59898de83c2a1b4d63e0d3c749cc8d7d0e9df31e8b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" podUID="7b7103db-15ac-4e33-89e2-50288a5e12dd" Dec 10 14:45:58 crc kubenswrapper[4727]: I1210 14:45:58.994579 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" event={"ID":"0f3300f7-c251-44e2-ab2d-84ee643c3c41","Type":"ContainerStarted","Data":"bafba3b9adb5914eb63ad18c331d144b126899d542f42d491648270295e9c43e"} Dec 10 14:46:00 crc kubenswrapper[4727]: I1210 14:46:00.449932 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z8mtg" Dec 10 14:46:00 crc kubenswrapper[4727]: I1210 14:46:00.493468 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z8mtg" Dec 10 14:46:00 crc kubenswrapper[4727]: I1210 14:46:00.691029 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z8mtg"] Dec 10 14:46:02 crc kubenswrapper[4727]: I1210 14:46:02.019022 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" event={"ID":"0f3300f7-c251-44e2-ab2d-84ee643c3c41","Type":"ContainerStarted","Data":"6fdf4c09196316386a5a56902b401aa8afa30e4f7cb9d21e4db4213671637b5a"} Dec 10 14:46:02 crc kubenswrapper[4727]: I1210 14:46:02.019432 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z8mtg" podUID="afb59750-e50a-4669-bcdd-8d0584d78f06" containerName="registry-server" containerID="cri-o://a78116fab72f651d364467bc7f9057820ba406760aace4d0e7d09c4d87ce469f" gracePeriod=2 Dec 10 14:46:02 crc kubenswrapper[4727]: I1210 14:46:02.066831 4727 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" podStartSLOduration=13.06679762 podStartE2EDuration="13.06679762s" podCreationTimestamp="2025-12-10 14:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:46:02.059745458 +0000 UTC m=+866.254520000" watchObservedRunningTime="2025-12-10 14:46:02.06679762 +0000 UTC m=+866.261572162" Dec 10 14:46:02 crc kubenswrapper[4727]: I1210 14:46:02.886586 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z8mtg" Dec 10 14:46:02 crc kubenswrapper[4727]: I1210 14:46:02.910436 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb59750-e50a-4669-bcdd-8d0584d78f06-catalog-content\") pod \"afb59750-e50a-4669-bcdd-8d0584d78f06\" (UID: \"afb59750-e50a-4669-bcdd-8d0584d78f06\") " Dec 10 14:46:02 crc kubenswrapper[4727]: I1210 14:46:02.910540 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb59750-e50a-4669-bcdd-8d0584d78f06-utilities\") pod \"afb59750-e50a-4669-bcdd-8d0584d78f06\" (UID: \"afb59750-e50a-4669-bcdd-8d0584d78f06\") " Dec 10 14:46:02 crc kubenswrapper[4727]: I1210 14:46:02.910597 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgckm\" (UniqueName: \"kubernetes.io/projected/afb59750-e50a-4669-bcdd-8d0584d78f06-kube-api-access-rgckm\") pod \"afb59750-e50a-4669-bcdd-8d0584d78f06\" (UID: \"afb59750-e50a-4669-bcdd-8d0584d78f06\") " Dec 10 14:46:02 crc kubenswrapper[4727]: I1210 14:46:02.912785 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afb59750-e50a-4669-bcdd-8d0584d78f06-utilities" (OuterVolumeSpecName: "utilities") pod "afb59750-e50a-4669-bcdd-8d0584d78f06" (UID: "afb59750-e50a-4669-bcdd-8d0584d78f06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:46:02 crc kubenswrapper[4727]: I1210 14:46:02.917619 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb59750-e50a-4669-bcdd-8d0584d78f06-kube-api-access-rgckm" (OuterVolumeSpecName: "kube-api-access-rgckm") pod "afb59750-e50a-4669-bcdd-8d0584d78f06" (UID: "afb59750-e50a-4669-bcdd-8d0584d78f06"). InnerVolumeSpecName "kube-api-access-rgckm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.011846 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb59750-e50a-4669-bcdd-8d0584d78f06-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.011882 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgckm\" (UniqueName: \"kubernetes.io/projected/afb59750-e50a-4669-bcdd-8d0584d78f06-kube-api-access-rgckm\") on node \"crc\" DevicePath \"\"" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.028576 4727 generic.go:334] "Generic (PLEG): container finished" podID="afb59750-e50a-4669-bcdd-8d0584d78f06" containerID="a78116fab72f651d364467bc7f9057820ba406760aace4d0e7d09c4d87ce469f" exitCode=0 Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.029242 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8mtg" event={"ID":"afb59750-e50a-4669-bcdd-8d0584d78f06","Type":"ContainerDied","Data":"a78116fab72f651d364467bc7f9057820ba406760aace4d0e7d09c4d87ce469f"} Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.029351 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8mtg" event={"ID":"afb59750-e50a-4669-bcdd-8d0584d78f06","Type":"ContainerDied","Data":"174c32be8a68c5fe2449d6282e61f78927ebd257f7f52fe0842f45d9120a5b94"} Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.029279 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z8mtg" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.029446 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.029569 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.029633 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.029713 4727 scope.go:117] "RemoveContainer" containerID="a78116fab72f651d364467bc7f9057820ba406760aace4d0e7d09c4d87ce469f" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.037703 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afb59750-e50a-4669-bcdd-8d0584d78f06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afb59750-e50a-4669-bcdd-8d0584d78f06" (UID: "afb59750-e50a-4669-bcdd-8d0584d78f06"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.061156 4727 scope.go:117] "RemoveContainer" containerID="87b6866b78d3357718ce16086d39db219f3e2a4225caa43290c880aec6c0f341" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.070638 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.081108 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.086402 4727 scope.go:117] "RemoveContainer" containerID="c5f596854ae5521bc09321ffea26d1e8a7062640aad6e37d47367d07c8de651e" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.099100 4727 scope.go:117] "RemoveContainer" containerID="a78116fab72f651d364467bc7f9057820ba406760aace4d0e7d09c4d87ce469f" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.099470 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a78116fab72f651d364467bc7f9057820ba406760aace4d0e7d09c4d87ce469f\": container with ID starting with a78116fab72f651d364467bc7f9057820ba406760aace4d0e7d09c4d87ce469f not found: ID does not exist" containerID="a78116fab72f651d364467bc7f9057820ba406760aace4d0e7d09c4d87ce469f" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.099503 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a78116fab72f651d364467bc7f9057820ba406760aace4d0e7d09c4d87ce469f"} err="failed to get container status \"a78116fab72f651d364467bc7f9057820ba406760aace4d0e7d09c4d87ce469f\": rpc error: code = NotFound desc = could not find container \"a78116fab72f651d364467bc7f9057820ba406760aace4d0e7d09c4d87ce469f\": container with ID starting with a78116fab72f651d364467bc7f9057820ba406760aace4d0e7d09c4d87ce469f not found: ID does not exist" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.099525 4727 scope.go:117] "RemoveContainer" containerID="87b6866b78d3357718ce16086d39db219f3e2a4225caa43290c880aec6c0f341" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.099765 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87b6866b78d3357718ce16086d39db219f3e2a4225caa43290c880aec6c0f341\": container with ID starting with 87b6866b78d3357718ce16086d39db219f3e2a4225caa43290c880aec6c0f341 not found: ID does not exist" containerID="87b6866b78d3357718ce16086d39db219f3e2a4225caa43290c880aec6c0f341" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.099797 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87b6866b78d3357718ce16086d39db219f3e2a4225caa43290c880aec6c0f341"} err="failed to get container status \"87b6866b78d3357718ce16086d39db219f3e2a4225caa43290c880aec6c0f341\": rpc error: code = NotFound desc = could not find container \"87b6866b78d3357718ce16086d39db219f3e2a4225caa43290c880aec6c0f341\": container with ID starting with 87b6866b78d3357718ce16086d39db219f3e2a4225caa43290c880aec6c0f341 not found: ID does not exist" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.099817 4727 scope.go:117] "RemoveContainer" containerID="c5f596854ae5521bc09321ffea26d1e8a7062640aad6e37d47367d07c8de651e" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.100191 4727 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"c5f596854ae5521bc09321ffea26d1e8a7062640aad6e37d47367d07c8de651e\": container with ID starting with c5f596854ae5521bc09321ffea26d1e8a7062640aad6e37d47367d07c8de651e not found: ID does not exist" containerID="c5f596854ae5521bc09321ffea26d1e8a7062640aad6e37d47367d07c8de651e" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.100218 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f596854ae5521bc09321ffea26d1e8a7062640aad6e37d47367d07c8de651e"} err="failed to get container status \"c5f596854ae5521bc09321ffea26d1e8a7062640aad6e37d47367d07c8de651e\": rpc error: code = NotFound desc = could not find container \"c5f596854ae5521bc09321ffea26d1e8a7062640aad6e37d47367d07c8de651e\": container with ID starting with c5f596854ae5521bc09321ffea26d1e8a7062640aad6e37d47367d07c8de651e not found: ID does not exist" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.112908 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb59750-e50a-4669-bcdd-8d0584d78f06-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.361133 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z8mtg"] Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.363936 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z8mtg"] Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.421951 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw"] Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.422106 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.422584 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.426480 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd"] Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.426636 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.427144 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.441705 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-d4j2k"] Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.441844 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.442465 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.468707 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-tbccx"] Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.468844 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.469102 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd_openshift-operators_173dcd53-e8a8-4f7c-a9f0-e923495d8068_0(8491052b00d7404be76fb954ef1807fbc4c0e110a5727e02768defb14edaeff1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.469187 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd_openshift-operators_173dcd53-e8a8-4f7c-a9f0-e923495d8068_0(8491052b00d7404be76fb954ef1807fbc4c0e110a5727e02768defb14edaeff1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.469215 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd_openshift-operators_173dcd53-e8a8-4f7c-a9f0-e923495d8068_0(8491052b00d7404be76fb954ef1807fbc4c0e110a5727e02768defb14edaeff1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.469275 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd_openshift-operators(173dcd53-e8a8-4f7c-a9f0-e923495d8068)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd_openshift-operators(173dcd53-e8a8-4f7c-a9f0-e923495d8068)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd_openshift-operators_173dcd53-e8a8-4f7c-a9f0-e923495d8068_0(8491052b00d7404be76fb954ef1807fbc4c0e110a5727e02768defb14edaeff1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" podUID="173dcd53-e8a8-4f7c-a9f0-e923495d8068" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.469323 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.490340 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw_openshift-operators_e6893fbb-598f-492a-83cc-ad8e77e058e8_0(c710b9a4adba501b0dc0d9b8e2ba358f288a6989d105675f6fe196d94b1caed1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.490417 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw_openshift-operators_e6893fbb-598f-492a-83cc-ad8e77e058e8_0(c710b9a4adba501b0dc0d9b8e2ba358f288a6989d105675f6fe196d94b1caed1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.490447 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw_openshift-operators_e6893fbb-598f-492a-83cc-ad8e77e058e8_0(c710b9a4adba501b0dc0d9b8e2ba358f288a6989d105675f6fe196d94b1caed1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.490502 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw_openshift-operators(e6893fbb-598f-492a-83cc-ad8e77e058e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw_openshift-operators(e6893fbb-598f-492a-83cc-ad8e77e058e8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw_openshift-operators_e6893fbb-598f-492a-83cc-ad8e77e058e8_0(c710b9a4adba501b0dc0d9b8e2ba358f288a6989d105675f6fe196d94b1caed1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" podUID="e6893fbb-598f-492a-83cc-ad8e77e058e8" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.501208 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6"] Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.501331 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" Dec 10 14:46:03 crc kubenswrapper[4727]: I1210 14:46:03.501717 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.515112 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-d4j2k_openshift-operators_7b7103db-15ac-4e33-89e2-50288a5e12dd_0(174cc656afe21d878b5cc2b307ead67ee316238a5074f16f2dc78df4d01cac10): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.515171 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-d4j2k_openshift-operators_7b7103db-15ac-4e33-89e2-50288a5e12dd_0(174cc656afe21d878b5cc2b307ead67ee316238a5074f16f2dc78df4d01cac10): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.515193 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-d4j2k_openshift-operators_7b7103db-15ac-4e33-89e2-50288a5e12dd_0(174cc656afe21d878b5cc2b307ead67ee316238a5074f16f2dc78df4d01cac10): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.515238 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-d4j2k_openshift-operators(7b7103db-15ac-4e33-89e2-50288a5e12dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-d4j2k_openshift-operators(7b7103db-15ac-4e33-89e2-50288a5e12dd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-d4j2k_openshift-operators_7b7103db-15ac-4e33-89e2-50288a5e12dd_0(174cc656afe21d878b5cc2b307ead67ee316238a5074f16f2dc78df4d01cac10): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" podUID="7b7103db-15ac-4e33-89e2-50288a5e12dd" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.529811 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tbccx_openshift-operators_10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c_0(d62cea5dc6b8e81ba9032aa28e28ad72a616a2d6deb5743fb1bfd30a09b85dbe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.529973 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tbccx_openshift-operators_10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c_0(d62cea5dc6b8e81ba9032aa28e28ad72a616a2d6deb5743fb1bfd30a09b85dbe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.530050 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tbccx_openshift-operators_10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c_0(d62cea5dc6b8e81ba9032aa28e28ad72a616a2d6deb5743fb1bfd30a09b85dbe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.530178 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-tbccx_openshift-operators(10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-tbccx_openshift-operators(10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tbccx_openshift-operators_10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c_0(d62cea5dc6b8e81ba9032aa28e28ad72a616a2d6deb5743fb1bfd30a09b85dbe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" podUID="10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.545154 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-hzsr6_openshift-operators_81baa0e2-5696-4291-9455-024cb6f22dd5_0(afcfde2fcd5a5ccb65290cdf3a6f982233bf28a2a839b220620d742674385e7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.545236 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-hzsr6_openshift-operators_81baa0e2-5696-4291-9455-024cb6f22dd5_0(afcfde2fcd5a5ccb65290cdf3a6f982233bf28a2a839b220620d742674385e7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.545257 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-hzsr6_openshift-operators_81baa0e2-5696-4291-9455-024cb6f22dd5_0(afcfde2fcd5a5ccb65290cdf3a6f982233bf28a2a839b220620d742674385e7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" Dec 10 14:46:03 crc kubenswrapper[4727]: E1210 14:46:03.545309 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-hzsr6_openshift-operators(81baa0e2-5696-4291-9455-024cb6f22dd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-hzsr6_openshift-operators(81baa0e2-5696-4291-9455-024cb6f22dd5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-hzsr6_openshift-operators_81baa0e2-5696-4291-9455-024cb6f22dd5_0(afcfde2fcd5a5ccb65290cdf3a6f982233bf28a2a839b220620d742674385e7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" podUID="81baa0e2-5696-4291-9455-024cb6f22dd5" Dec 10 14:46:04 crc kubenswrapper[4727]: I1210 14:46:04.571270 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb59750-e50a-4669-bcdd-8d0584d78f06" path="/var/lib/kubelet/pods/afb59750-e50a-4669-bcdd-8d0584d78f06/volumes" Dec 10 14:46:07 crc kubenswrapper[4727]: I1210 14:46:07.723588 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:46:07 crc kubenswrapper[4727]: I1210 14:46:07.723664 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:46:07 crc kubenswrapper[4727]: I1210 14:46:07.723716 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:46:07 crc kubenswrapper[4727]: I1210 14:46:07.724474 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cd46d2062fb92e117b59daaca2ef5ffa90b444c25a1b8e3e5c4e2bdf99695cf9"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 14:46:07 crc kubenswrapper[4727]: I1210 14:46:07.724558 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://cd46d2062fb92e117b59daaca2ef5ffa90b444c25a1b8e3e5c4e2bdf99695cf9" gracePeriod=600 Dec 10 14:46:08 crc kubenswrapper[4727]: I1210 14:46:08.058607 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="cd46d2062fb92e117b59daaca2ef5ffa90b444c25a1b8e3e5c4e2bdf99695cf9" exitCode=0 Dec 10 14:46:08 crc kubenswrapper[4727]: I1210 14:46:08.058692 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" 
event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"cd46d2062fb92e117b59daaca2ef5ffa90b444c25a1b8e3e5c4e2bdf99695cf9"} Dec 10 14:46:08 crc kubenswrapper[4727]: I1210 14:46:08.059008 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"d4da8f537ad153791693a45193621652b13001e5dd72906f744d548921ad04f8"} Dec 10 14:46:08 crc kubenswrapper[4727]: I1210 14:46:08.059037 4727 scope.go:117] "RemoveContainer" containerID="6962415495a564f00fc2a840efe560b2ce35c3f78b92a0cc2b2c5231d5185282" Dec 10 14:46:15 crc kubenswrapper[4727]: I1210 14:46:15.562383 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:46:15 crc kubenswrapper[4727]: I1210 14:46:15.562503 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:46:15 crc kubenswrapper[4727]: I1210 14:46:15.563779 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:46:15 crc kubenswrapper[4727]: I1210 14:46:15.564176 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" Dec 10 14:46:15 crc kubenswrapper[4727]: I1210 14:46:15.947574 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd"] Dec 10 14:46:16 crc kubenswrapper[4727]: I1210 14:46:16.120221 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-d4j2k"] Dec 10 14:46:16 crc kubenswrapper[4727]: W1210 14:46:16.134166 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b7103db_15ac_4e33_89e2_50288a5e12dd.slice/crio-17aafb161cd9f60736b2b41707418dea75dfadae46e22dd5c24f5fb299708a6c WatchSource:0}: Error finding container 17aafb161cd9f60736b2b41707418dea75dfadae46e22dd5c24f5fb299708a6c: Status 404 returned error can't find the container with id 17aafb161cd9f60736b2b41707418dea75dfadae46e22dd5c24f5fb299708a6c Dec 10 14:46:16 crc kubenswrapper[4727]: I1210 14:46:16.397836 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" event={"ID":"7b7103db-15ac-4e33-89e2-50288a5e12dd","Type":"ContainerStarted","Data":"17aafb161cd9f60736b2b41707418dea75dfadae46e22dd5c24f5fb299708a6c"} Dec 10 14:46:16 crc kubenswrapper[4727]: I1210 14:46:16.400154 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" event={"ID":"173dcd53-e8a8-4f7c-a9f0-e923495d8068","Type":"ContainerStarted","Data":"2e356f738a4179137e62daeb363557a0609e35906949cf5d25035ce0055e94f6"} Dec 10 14:46:16 crc kubenswrapper[4727]: I1210 14:46:16.567058 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" Dec 10 14:46:16 crc kubenswrapper[4727]: I1210 14:46:16.567364 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" Dec 10 14:46:16 crc kubenswrapper[4727]: I1210 14:46:16.787664 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6"] Dec 10 14:46:17 crc kubenswrapper[4727]: I1210 14:46:17.568302 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:46:17 crc kubenswrapper[4727]: I1210 14:46:17.569067 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" Dec 10 14:46:17 crc kubenswrapper[4727]: I1210 14:46:17.569466 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:46:17 crc kubenswrapper[4727]: I1210 14:46:17.569660 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:46:17 crc kubenswrapper[4727]: I1210 14:46:17.614359 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" event={"ID":"81baa0e2-5696-4291-9455-024cb6f22dd5","Type":"ContainerStarted","Data":"3ee44e137b432ed644b493301e3c892b648b272747b6e2ea6dff2a4b76eea49f"} Dec 10 14:46:18 crc kubenswrapper[4727]: I1210 14:46:18.318018 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw"] Dec 10 14:46:18 crc kubenswrapper[4727]: W1210 14:46:18.382373 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6893fbb_598f_492a_83cc_ad8e77e058e8.slice/crio-2dd28a184bb5f66fee3061458df06fcf4715672cafe87220a8c1aa1281d8c496 WatchSource:0}: Error finding container 2dd28a184bb5f66fee3061458df06fcf4715672cafe87220a8c1aa1281d8c496: Status 404 returned error can't find the container with id 2dd28a184bb5f66fee3061458df06fcf4715672cafe87220a8c1aa1281d8c496 Dec 10 14:46:18 crc kubenswrapper[4727]: I1210 14:46:18.416366 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-tbccx"] Dec 10 14:46:19 crc kubenswrapper[4727]: I1210 14:46:19.033169 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" event={"ID":"10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c","Type":"ContainerStarted","Data":"ea78e90e9251c976dbdb590b599a1461ed71ae0457da390d1a9b1cada6714623"} Dec 10 14:46:19 crc kubenswrapper[4727]: I1210 14:46:19.038517 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" event={"ID":"e6893fbb-598f-492a-83cc-ad8e77e058e8","Type":"ContainerStarted","Data":"2dd28a184bb5f66fee3061458df06fcf4715672cafe87220a8c1aa1281d8c496"} Dec 10 14:46:19 crc kubenswrapper[4727]: I1210 14:46:19.535416 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wwnhv" Dec 10 14:46:35 crc kubenswrapper[4727]: E1210 14:46:35.898169 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385" Dec 10 14:46:35 crc kubenswrapper[4727]: E1210 14:46:35.898883 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxl2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5446b9c989-d4j2k_openshift-operators(7b7103db-15ac-4e33-89e2-50288a5e12dd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:46:35 crc kubenswrapper[4727]: E1210 14:46:35.900085 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" podUID="7b7103db-15ac-4e33-89e2-50288a5e12dd" Dec 10 14:46:36 crc kubenswrapper[4727]: E1210 14:46:36.716514 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385\\\"\"" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" 
podUID="7b7103db-15ac-4e33-89e2-50288a5e12dd" Dec 10 14:46:39 crc kubenswrapper[4727]: I1210 14:46:39.735399 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" event={"ID":"173dcd53-e8a8-4f7c-a9f0-e923495d8068","Type":"ContainerStarted","Data":"e9d632f2ea4cf41f6fd3982af5f234d0fff8663b9d05ef3307fbc9c9ca85fcaa"} Dec 10 14:46:39 crc kubenswrapper[4727]: I1210 14:46:39.755373 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" event={"ID":"81baa0e2-5696-4291-9455-024cb6f22dd5","Type":"ContainerStarted","Data":"4b262eb8e726d9cba881292d03f773fd2c58499bcab679f906fa7e90c8e5f613"} Dec 10 14:46:39 crc kubenswrapper[4727]: I1210 14:46:39.881564 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" event={"ID":"e6893fbb-598f-492a-83cc-ad8e77e058e8","Type":"ContainerStarted","Data":"f3bb8e3f774b35cec6a6ce698d546877e8f25b7538f7c1c9c5ddb06246c30daa"} Dec 10 14:46:39 crc kubenswrapper[4727]: I1210 14:46:39.892364 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" event={"ID":"10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c","Type":"ContainerStarted","Data":"ad737a915fc1ee13188583c8eaac397925f99e987e2afd28bbb4dd361c3c4219"} Dec 10 14:46:39 crc kubenswrapper[4727]: I1210 14:46:39.892801 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:46:39 crc kubenswrapper[4727]: I1210 14:46:39.894226 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd" podStartSLOduration=22.306413271 podStartE2EDuration="44.894184656s" podCreationTimestamp="2025-12-10 14:45:55 +0000 UTC" firstStartedPulling="2025-12-10 14:46:15.956902145 +0000 UTC m=+880.151676687" lastFinishedPulling="2025-12-10 14:46:38.54467353 +0000 UTC m=+902.739448072" observedRunningTime="2025-12-10 14:46:39.890484103 +0000 UTC m=+904.085258645" watchObservedRunningTime="2025-12-10 14:46:39.894184656 +0000 UTC m=+904.088959198" Dec 10 14:46:39 crc kubenswrapper[4727]: I1210 14:46:39.911305 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" Dec 10 14:46:39 crc kubenswrapper[4727]: I1210 14:46:39.924683 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hzsr6" podStartSLOduration=23.201147995 podStartE2EDuration="44.924665952s" podCreationTimestamp="2025-12-10 14:45:55 +0000 UTC" firstStartedPulling="2025-12-10 14:46:16.821154333 +0000 UTC m=+881.015928875" lastFinishedPulling="2025-12-10 14:46:38.54467229 +0000 UTC m=+902.739446832" observedRunningTime="2025-12-10 14:46:39.9233953 +0000 UTC m=+904.118169842" watchObservedRunningTime="2025-12-10 14:46:39.924665952 +0000 UTC m=+904.119440494" Dec 10 14:46:39 crc kubenswrapper[4727]: I1210 14:46:39.971226 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw" podStartSLOduration=24.829118606 podStartE2EDuration="44.971209071s" podCreationTimestamp="2025-12-10 14:45:55 +0000 UTC" firstStartedPulling="2025-12-10 14:46:18.402586945 +0000 UTC m=+882.597361487" 
lastFinishedPulling="2025-12-10 14:46:38.54467741 +0000 UTC m=+902.739451952" observedRunningTime="2025-12-10 14:46:39.969194271 +0000 UTC m=+904.163968813" watchObservedRunningTime="2025-12-10 14:46:39.971209071 +0000 UTC m=+904.165983613" Dec 10 14:46:40 crc kubenswrapper[4727]: I1210 14:46:40.015045 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-tbccx" podStartSLOduration=24.862941111 podStartE2EDuration="45.015022332s" podCreationTimestamp="2025-12-10 14:45:55 +0000 UTC" firstStartedPulling="2025-12-10 14:46:18.437066106 +0000 UTC m=+882.631840648" lastFinishedPulling="2025-12-10 14:46:38.589147327 +0000 UTC m=+902.783921869" observedRunningTime="2025-12-10 14:46:40.010876188 +0000 UTC m=+904.205650730" watchObservedRunningTime="2025-12-10 14:46:40.015022332 +0000 UTC m=+904.209796874" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.166570 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-gwq6p"] Dec 10 14:46:47 crc kubenswrapper[4727]: E1210 14:46:47.167481 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb59750-e50a-4669-bcdd-8d0584d78f06" containerName="extract-utilities" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.167496 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb59750-e50a-4669-bcdd-8d0584d78f06" containerName="extract-utilities" Dec 10 14:46:47 crc kubenswrapper[4727]: E1210 14:46:47.167529 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb59750-e50a-4669-bcdd-8d0584d78f06" containerName="extract-content" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.167535 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb59750-e50a-4669-bcdd-8d0584d78f06" containerName="extract-content" Dec 10 14:46:47 crc kubenswrapper[4727]: E1210 14:46:47.167546 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb59750-e50a-4669-bcdd-8d0584d78f06" containerName="registry-server" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.167552 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb59750-e50a-4669-bcdd-8d0584d78f06" containerName="registry-server" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.167657 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="afb59750-e50a-4669-bcdd-8d0584d78f06" containerName="registry-server" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.168232 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-gwq6p" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.170560 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.170771 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.174510 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-gwq6p"] Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.182169 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-r6b6k"] Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.182942 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-r6b6k" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.183248 4727 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-lgss8" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.184570 4727 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-9jthl" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.201836 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-r6b6k"] Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.205139 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-lqpdt"] Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.206102 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-lqpdt" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.207640 4727 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-kklb5" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.227283 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-lqpdt"] Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.366033 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbc24\" (UniqueName: \"kubernetes.io/projected/f5ae0272-0a54-4f54-99c5-8aa79709f8b7-kube-api-access-fbc24\") pod \"cert-manager-5b446d88c5-r6b6k\" (UID: \"f5ae0272-0a54-4f54-99c5-8aa79709f8b7\") " pod="cert-manager/cert-manager-5b446d88c5-r6b6k" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.366124 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7zjp\" (UniqueName: \"kubernetes.io/projected/03818f03-7545-4ec1-9ad4-87ee47095668-kube-api-access-k7zjp\") pod \"cert-manager-cainjector-7f985d654d-gwq6p\" (UID: \"03818f03-7545-4ec1-9ad4-87ee47095668\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-gwq6p" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.366200 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnxnm\" (UniqueName: \"kubernetes.io/projected/97e964eb-2a4f-4bd4-8564-ef21f729095b-kube-api-access-xnxnm\") pod \"cert-manager-webhook-5655c58dd6-lqpdt\" (UID: \"97e964eb-2a4f-4bd4-8564-ef21f729095b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-lqpdt" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.466937 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbc24\" (UniqueName: \"kubernetes.io/projected/f5ae0272-0a54-4f54-99c5-8aa79709f8b7-kube-api-access-fbc24\") pod \"cert-manager-5b446d88c5-r6b6k\" (UID: \"f5ae0272-0a54-4f54-99c5-8aa79709f8b7\") " pod="cert-manager/cert-manager-5b446d88c5-r6b6k" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.467009 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7zjp\" (UniqueName: \"kubernetes.io/projected/03818f03-7545-4ec1-9ad4-87ee47095668-kube-api-access-k7zjp\") pod \"cert-manager-cainjector-7f985d654d-gwq6p\" (UID: \"03818f03-7545-4ec1-9ad4-87ee47095668\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-gwq6p" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.467124 
4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnxnm\" (UniqueName: \"kubernetes.io/projected/97e964eb-2a4f-4bd4-8564-ef21f729095b-kube-api-access-xnxnm\") pod \"cert-manager-webhook-5655c58dd6-lqpdt\" (UID: \"97e964eb-2a4f-4bd4-8564-ef21f729095b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-lqpdt" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.485849 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnxnm\" (UniqueName: \"kubernetes.io/projected/97e964eb-2a4f-4bd4-8564-ef21f729095b-kube-api-access-xnxnm\") pod \"cert-manager-webhook-5655c58dd6-lqpdt\" (UID: \"97e964eb-2a4f-4bd4-8564-ef21f729095b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-lqpdt" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.486749 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7zjp\" (UniqueName: \"kubernetes.io/projected/03818f03-7545-4ec1-9ad4-87ee47095668-kube-api-access-k7zjp\") pod \"cert-manager-cainjector-7f985d654d-gwq6p\" (UID: \"03818f03-7545-4ec1-9ad4-87ee47095668\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-gwq6p" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.487185 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbc24\" (UniqueName: \"kubernetes.io/projected/f5ae0272-0a54-4f54-99c5-8aa79709f8b7-kube-api-access-fbc24\") pod \"cert-manager-5b446d88c5-r6b6k\" (UID: \"f5ae0272-0a54-4f54-99c5-8aa79709f8b7\") " pod="cert-manager/cert-manager-5b446d88c5-r6b6k" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.518199 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-gwq6p" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.529408 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-r6b6k" Dec 10 14:46:47 crc kubenswrapper[4727]: I1210 14:46:47.540008 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-lqpdt" Dec 10 14:46:48 crc kubenswrapper[4727]: I1210 14:46:48.109390 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-gwq6p"] Dec 10 14:46:48 crc kubenswrapper[4727]: I1210 14:46:48.153343 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-lqpdt"] Dec 10 14:46:48 crc kubenswrapper[4727]: W1210 14:46:48.156453 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97e964eb_2a4f_4bd4_8564_ef21f729095b.slice/crio-07d0173427d19fe35961ceff8f82acaf0c9e973694acc9c39eadde8ad541e9da WatchSource:0}: Error finding container 07d0173427d19fe35961ceff8f82acaf0c9e973694acc9c39eadde8ad541e9da: Status 404 returned error can't find the container with id 07d0173427d19fe35961ceff8f82acaf0c9e973694acc9c39eadde8ad541e9da Dec 10 14:46:48 crc kubenswrapper[4727]: I1210 14:46:48.194618 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-r6b6k"] Dec 10 14:46:48 crc kubenswrapper[4727]: W1210 14:46:48.198549 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5ae0272_0a54_4f54_99c5_8aa79709f8b7.slice/crio-5d3efa856bb56fcb3bfce234934dcc73d1d9d3233a7b3162a3617868be14cbf7 WatchSource:0}: Error finding container 5d3efa856bb56fcb3bfce234934dcc73d1d9d3233a7b3162a3617868be14cbf7: Status 404 returned error can't find the container with id 5d3efa856bb56fcb3bfce234934dcc73d1d9d3233a7b3162a3617868be14cbf7 Dec 10 14:46:49 crc kubenswrapper[4727]: I1210 14:46:49.032365 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-r6b6k" event={"ID":"f5ae0272-0a54-4f54-99c5-8aa79709f8b7","Type":"ContainerStarted","Data":"5d3efa856bb56fcb3bfce234934dcc73d1d9d3233a7b3162a3617868be14cbf7"} Dec 10 14:46:49 crc kubenswrapper[4727]: I1210 14:46:49.034417 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-gwq6p" event={"ID":"03818f03-7545-4ec1-9ad4-87ee47095668","Type":"ContainerStarted","Data":"75881b81394203bbb2e5f0f59c7687c4d895c6655dd0ddf444f4f78f1171cb00"} Dec 10 14:46:49 crc kubenswrapper[4727]: I1210 14:46:49.035725 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-lqpdt" event={"ID":"97e964eb-2a4f-4bd4-8564-ef21f729095b","Type":"ContainerStarted","Data":"07d0173427d19fe35961ceff8f82acaf0c9e973694acc9c39eadde8ad541e9da"} Dec 10 14:46:50 crc kubenswrapper[4727]: I1210 14:46:50.056401 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" event={"ID":"7b7103db-15ac-4e33-89e2-50288a5e12dd","Type":"ContainerStarted","Data":"97efffb05824c1e5e83920f6097f954dee814da0f0057c26b421b4350cbcbb89"} Dec 10 14:46:50 crc kubenswrapper[4727]: I1210 14:46:50.056713 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:46:50 crc kubenswrapper[4727]: I1210 14:46:50.080194 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" podStartSLOduration=22.15059421 podStartE2EDuration="55.080172571s" podCreationTimestamp="2025-12-10 14:45:55 +0000 UTC" firstStartedPulling="2025-12-10 14:46:16.136555711 
+0000 UTC m=+880.331330263" lastFinishedPulling="2025-12-10 14:46:49.066134082 +0000 UTC m=+913.260908624" observedRunningTime="2025-12-10 14:46:50.076843027 +0000 UTC m=+914.271617569" watchObservedRunningTime="2025-12-10 14:46:50.080172571 +0000 UTC m=+914.274947113" Dec 10 14:46:53 crc kubenswrapper[4727]: I1210 14:46:53.075053 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-lqpdt" event={"ID":"97e964eb-2a4f-4bd4-8564-ef21f729095b","Type":"ContainerStarted","Data":"36603fb614c84ba854d3f262e11da866b575369aa2e74d1e8488cf76a88bae53"} Dec 10 14:46:53 crc kubenswrapper[4727]: I1210 14:46:53.075680 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-lqpdt" Dec 10 14:46:53 crc kubenswrapper[4727]: I1210 14:46:53.077314 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-r6b6k" event={"ID":"f5ae0272-0a54-4f54-99c5-8aa79709f8b7","Type":"ContainerStarted","Data":"41ba2b86f017615e322327452ff9caeafa3ee1e362e3ac179d7835ca013f2032"} Dec 10 14:46:53 crc kubenswrapper[4727]: I1210 14:46:53.079273 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-gwq6p" event={"ID":"03818f03-7545-4ec1-9ad4-87ee47095668","Type":"ContainerStarted","Data":"11ef9fb885f7144060726947420d32348c67a222a0544e8f27b3854c34b7fb2c"} Dec 10 14:46:53 crc kubenswrapper[4727]: I1210 14:46:53.090921 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-lqpdt" podStartSLOduration=1.557432092 podStartE2EDuration="6.090882056s" podCreationTimestamp="2025-12-10 14:46:47 +0000 UTC" firstStartedPulling="2025-12-10 14:46:48.167297809 +0000 UTC m=+912.362072351" lastFinishedPulling="2025-12-10 14:46:52.700747763 +0000 UTC m=+916.895522315" observedRunningTime="2025-12-10 14:46:53.089343527 +0000 UTC m=+917.284118089" watchObservedRunningTime="2025-12-10 14:46:53.090882056 +0000 UTC m=+917.285656598" Dec 10 14:46:53 crc kubenswrapper[4727]: I1210 14:46:53.108868 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-r6b6k" podStartSLOduration=1.602614876 podStartE2EDuration="6.108845257s" podCreationTimestamp="2025-12-10 14:46:47 +0000 UTC" firstStartedPulling="2025-12-10 14:46:48.202702498 +0000 UTC m=+912.397477040" lastFinishedPulling="2025-12-10 14:46:52.708932879 +0000 UTC m=+916.903707421" observedRunningTime="2025-12-10 14:46:53.107017911 +0000 UTC m=+917.301792473" watchObservedRunningTime="2025-12-10 14:46:53.108845257 +0000 UTC m=+917.303619799" Dec 10 14:46:53 crc kubenswrapper[4727]: I1210 14:46:53.139109 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-gwq6p" podStartSLOduration=1.550287862 podStartE2EDuration="6.139091807s" podCreationTimestamp="2025-12-10 14:46:47 +0000 UTC" firstStartedPulling="2025-12-10 14:46:48.122254337 +0000 UTC m=+912.317028879" lastFinishedPulling="2025-12-10 14:46:52.711058282 +0000 UTC m=+916.905832824" observedRunningTime="2025-12-10 14:46:53.135988509 +0000 UTC m=+917.330763051" watchObservedRunningTime="2025-12-10 14:46:53.139091807 +0000 UTC m=+917.333866349" Dec 10 14:46:56 crc kubenswrapper[4727]: I1210 14:46:56.222582 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-d4j2k" Dec 10 14:46:57 crc 
kubenswrapper[4727]: I1210 14:46:57.548093 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-lqpdt" Dec 10 14:47:26 crc kubenswrapper[4727]: I1210 14:47:26.333580 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6"] Dec 10 14:47:26 crc kubenswrapper[4727]: I1210 14:47:26.335334 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" Dec 10 14:47:26 crc kubenswrapper[4727]: I1210 14:47:26.338038 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 10 14:47:26 crc kubenswrapper[4727]: I1210 14:47:26.346843 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6"] Dec 10 14:47:26 crc kubenswrapper[4727]: I1210 14:47:26.606635 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8bzh\" (UniqueName: \"kubernetes.io/projected/163d48a4-75d1-458f-96a2-18760ac78989-kube-api-access-x8bzh\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6\" (UID: \"163d48a4-75d1-458f-96a2-18760ac78989\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" Dec 10 14:47:26 crc kubenswrapper[4727]: I1210 14:47:26.606747 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/163d48a4-75d1-458f-96a2-18760ac78989-util\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6\" (UID: \"163d48a4-75d1-458f-96a2-18760ac78989\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" Dec 10 14:47:26 crc kubenswrapper[4727]: I1210 14:47:26.606784 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/163d48a4-75d1-458f-96a2-18760ac78989-bundle\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6\" (UID: \"163d48a4-75d1-458f-96a2-18760ac78989\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" Dec 10 14:47:26 crc kubenswrapper[4727]: I1210 14:47:26.707334 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/163d48a4-75d1-458f-96a2-18760ac78989-bundle\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6\" (UID: \"163d48a4-75d1-458f-96a2-18760ac78989\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" Dec 10 14:47:26 crc kubenswrapper[4727]: I1210 14:47:26.707416 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/163d48a4-75d1-458f-96a2-18760ac78989-util\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6\" (UID: \"163d48a4-75d1-458f-96a2-18760ac78989\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" Dec 10 14:47:26 crc kubenswrapper[4727]: I1210 14:47:26.707464 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8bzh\" (UniqueName: 
\"kubernetes.io/projected/163d48a4-75d1-458f-96a2-18760ac78989-kube-api-access-x8bzh\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6\" (UID: \"163d48a4-75d1-458f-96a2-18760ac78989\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" Dec 10 14:47:26 crc kubenswrapper[4727]: I1210 14:47:26.708134 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/163d48a4-75d1-458f-96a2-18760ac78989-util\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6\" (UID: \"163d48a4-75d1-458f-96a2-18760ac78989\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" Dec 10 14:47:26 crc kubenswrapper[4727]: I1210 14:47:26.708244 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/163d48a4-75d1-458f-96a2-18760ac78989-bundle\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6\" (UID: \"163d48a4-75d1-458f-96a2-18760ac78989\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" Dec 10 14:47:26 crc kubenswrapper[4727]: I1210 14:47:26.734341 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8bzh\" (UniqueName: \"kubernetes.io/projected/163d48a4-75d1-458f-96a2-18760ac78989-kube-api-access-x8bzh\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6\" (UID: \"163d48a4-75d1-458f-96a2-18760ac78989\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" Dec 10 14:47:26 crc kubenswrapper[4727]: I1210 14:47:26.966513 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" Dec 10 14:47:27 crc kubenswrapper[4727]: I1210 14:47:27.214859 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6"] Dec 10 14:47:27 crc kubenswrapper[4727]: I1210 14:47:27.297664 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" event={"ID":"163d48a4-75d1-458f-96a2-18760ac78989","Type":"ContainerStarted","Data":"07aa9ec3ad8870b8ece3416bec6f4f4f1fb0f92fb50873404a0f55a98e208d03"} Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.217932 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.220124 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.223563 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.223792 4727 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-9wc4l" Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.224672 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.229359 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.305069 4727 generic.go:334] "Generic (PLEG): container finished" podID="163d48a4-75d1-458f-96a2-18760ac78989" containerID="02d552f15a5a2265e0bcbf95a9bc75c8d891df6d4897ff12ea86134cee72f648" exitCode=0 Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.305153 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" event={"ID":"163d48a4-75d1-458f-96a2-18760ac78989","Type":"ContainerDied","Data":"02d552f15a5a2265e0bcbf95a9bc75c8d891df6d4897ff12ea86134cee72f648"} Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.329129 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f2a870f3-183e-4870-8014-081543ef5cce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2a870f3-183e-4870-8014-081543ef5cce\") pod \"minio\" (UID: \"858f07c9-5c19-401b-afa7-ceca4188da5d\") " pod="minio-dev/minio" Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.329170 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9hsm\" (UniqueName: \"kubernetes.io/projected/858f07c9-5c19-401b-afa7-ceca4188da5d-kube-api-access-t9hsm\") pod \"minio\" (UID: \"858f07c9-5c19-401b-afa7-ceca4188da5d\") " pod="minio-dev/minio" Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.435599 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f2a870f3-183e-4870-8014-081543ef5cce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2a870f3-183e-4870-8014-081543ef5cce\") pod \"minio\" (UID: \"858f07c9-5c19-401b-afa7-ceca4188da5d\") " pod="minio-dev/minio" Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.435862 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9hsm\" (UniqueName: \"kubernetes.io/projected/858f07c9-5c19-401b-afa7-ceca4188da5d-kube-api-access-t9hsm\") pod \"minio\" (UID: \"858f07c9-5c19-401b-afa7-ceca4188da5d\") " pod="minio-dev/minio" Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.445930 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.445970 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f2a870f3-183e-4870-8014-081543ef5cce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2a870f3-183e-4870-8014-081543ef5cce\") pod \"minio\" (UID: \"858f07c9-5c19-401b-afa7-ceca4188da5d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e221275f4a5e5fdbaddc1cdc56a6b08176a8bae0fe16a0e3884e865c7028d6da/globalmount\"" pod="minio-dev/minio"
Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.464140 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9hsm\" (UniqueName: \"kubernetes.io/projected/858f07c9-5c19-401b-afa7-ceca4188da5d-kube-api-access-t9hsm\") pod \"minio\" (UID: \"858f07c9-5c19-401b-afa7-ceca4188da5d\") " pod="minio-dev/minio"
Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.494961 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f2a870f3-183e-4870-8014-081543ef5cce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2a870f3-183e-4870-8014-081543ef5cce\") pod \"minio\" (UID: \"858f07c9-5c19-401b-afa7-ceca4188da5d\") " pod="minio-dev/minio"
Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.605956 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Dec 10 14:47:28 crc kubenswrapper[4727]: I1210 14:47:28.803616 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Dec 10 14:47:28 crc kubenswrapper[4727]: W1210 14:47:28.809357 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod858f07c9_5c19_401b_afa7_ceca4188da5d.slice/crio-8d4bde4b6ebf01ce998ee5b526a2a51449be25e730f3b5c433f701d9cb159124 WatchSource:0}: Error finding container 8d4bde4b6ebf01ce998ee5b526a2a51449be25e730f3b5c433f701d9cb159124: Status 404 returned error can't find the container with id 8d4bde4b6ebf01ce998ee5b526a2a51449be25e730f3b5c433f701d9cb159124
Dec 10 14:47:29 crc kubenswrapper[4727]: I1210 14:47:29.316643 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"858f07c9-5c19-401b-afa7-ceca4188da5d","Type":"ContainerStarted","Data":"8d4bde4b6ebf01ce998ee5b526a2a51449be25e730f3b5c433f701d9cb159124"}
Dec 10 14:47:30 crc kubenswrapper[4727]: I1210 14:47:30.325226 4727 generic.go:334] "Generic (PLEG): container finished" podID="163d48a4-75d1-458f-96a2-18760ac78989" containerID="38fcb67bff5041b6d32295ef18d569eb4fd3b6bcc22a50d61d3e6d3eefddda49" exitCode=0
Dec 10 14:47:30 crc kubenswrapper[4727]: I1210 14:47:30.325928 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" event={"ID":"163d48a4-75d1-458f-96a2-18760ac78989","Type":"ContainerDied","Data":"38fcb67bff5041b6d32295ef18d569eb4fd3b6bcc22a50d61d3e6d3eefddda49"}
Dec 10 14:47:34 crc kubenswrapper[4727]: I1210 14:47:34.462242 4727 generic.go:334] "Generic (PLEG): container finished" podID="163d48a4-75d1-458f-96a2-18760ac78989" containerID="d8b5d87c7310a5e6a445c7f8f415449508556369ed2cf0e100975942c69dad69" exitCode=0
Dec 10 14:47:34 crc kubenswrapper[4727]: I1210 14:47:34.462289 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" event={"ID":"163d48a4-75d1-458f-96a2-18760ac78989","Type":"ContainerDied","Data":"d8b5d87c7310a5e6a445c7f8f415449508556369ed2cf0e100975942c69dad69"}
Dec 10 14:47:34 crc kubenswrapper[4727]: I1210 14:47:34.464922 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"858f07c9-5c19-401b-afa7-ceca4188da5d","Type":"ContainerStarted","Data":"6983d034fc39fe029b4317cbadac88110c95df5ada9a2d6b77189177966aa8b9"}
Dec 10 14:47:34 crc kubenswrapper[4727]: I1210 14:47:34.500538 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.255256403 podStartE2EDuration="9.50048226s" podCreationTimestamp="2025-12-10 14:47:25 +0000 UTC" firstStartedPulling="2025-12-10 14:47:28.811190925 +0000 UTC m=+953.005965467" lastFinishedPulling="2025-12-10 14:47:34.056416782 +0000 UTC m=+958.251191324" observedRunningTime="2025-12-10 14:47:34.498984932 +0000 UTC m=+958.693759484" watchObservedRunningTime="2025-12-10 14:47:34.50048226 +0000 UTC m=+958.695256802"
Dec 10 14:47:35 crc kubenswrapper[4727]: I1210 14:47:35.919785 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6"
Dec 10 14:47:36 crc kubenswrapper[4727]: I1210 14:47:36.091056 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8bzh\" (UniqueName: \"kubernetes.io/projected/163d48a4-75d1-458f-96a2-18760ac78989-kube-api-access-x8bzh\") pod \"163d48a4-75d1-458f-96a2-18760ac78989\" (UID: \"163d48a4-75d1-458f-96a2-18760ac78989\") "
Dec 10 14:47:36 crc kubenswrapper[4727]: I1210 14:47:36.091178 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/163d48a4-75d1-458f-96a2-18760ac78989-bundle\") pod \"163d48a4-75d1-458f-96a2-18760ac78989\" (UID: \"163d48a4-75d1-458f-96a2-18760ac78989\") "
Dec 10 14:47:36 crc kubenswrapper[4727]: I1210 14:47:36.091339 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/163d48a4-75d1-458f-96a2-18760ac78989-util\") pod \"163d48a4-75d1-458f-96a2-18760ac78989\" (UID: \"163d48a4-75d1-458f-96a2-18760ac78989\") "
Dec 10 14:47:36 crc kubenswrapper[4727]: I1210 14:47:36.092416 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/163d48a4-75d1-458f-96a2-18760ac78989-bundle" (OuterVolumeSpecName: "bundle") pod "163d48a4-75d1-458f-96a2-18760ac78989" (UID: "163d48a4-75d1-458f-96a2-18760ac78989"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:47:36 crc kubenswrapper[4727]: I1210 14:47:36.100413 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/163d48a4-75d1-458f-96a2-18760ac78989-kube-api-access-x8bzh" (OuterVolumeSpecName: "kube-api-access-x8bzh") pod "163d48a4-75d1-458f-96a2-18760ac78989" (UID: "163d48a4-75d1-458f-96a2-18760ac78989"). InnerVolumeSpecName "kube-api-access-x8bzh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:47:36 crc kubenswrapper[4727]: I1210 14:47:36.102266 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/163d48a4-75d1-458f-96a2-18760ac78989-util" (OuterVolumeSpecName: "util") pod "163d48a4-75d1-458f-96a2-18760ac78989" (UID: "163d48a4-75d1-458f-96a2-18760ac78989"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:47:36 crc kubenswrapper[4727]: I1210 14:47:36.192981 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/163d48a4-75d1-458f-96a2-18760ac78989-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:47:36 crc kubenswrapper[4727]: I1210 14:47:36.193014 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/163d48a4-75d1-458f-96a2-18760ac78989-util\") on node \"crc\" DevicePath \"\"" Dec 10 14:47:36 crc kubenswrapper[4727]: I1210 14:47:36.193026 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8bzh\" (UniqueName: \"kubernetes.io/projected/163d48a4-75d1-458f-96a2-18760ac78989-kube-api-access-x8bzh\") on node \"crc\" DevicePath \"\"" Dec 10 14:47:36 crc kubenswrapper[4727]: I1210 14:47:36.484347 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" event={"ID":"163d48a4-75d1-458f-96a2-18760ac78989","Type":"ContainerDied","Data":"07aa9ec3ad8870b8ece3416bec6f4f4f1fb0f92fb50873404a0f55a98e208d03"} Dec 10 14:47:36 crc kubenswrapper[4727]: I1210 14:47:36.484678 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07aa9ec3ad8870b8ece3416bec6f4f4f1fb0f92fb50873404a0f55a98e208d03" Dec 10 14:47:36 crc kubenswrapper[4727]: I1210 14:47:36.484456 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6" Dec 10 14:47:45 crc kubenswrapper[4727]: I1210 14:47:45.923609 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp"] Dec 10 14:47:45 crc kubenswrapper[4727]: E1210 14:47:45.925449 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163d48a4-75d1-458f-96a2-18760ac78989" containerName="util" Dec 10 14:47:45 crc kubenswrapper[4727]: I1210 14:47:45.925544 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="163d48a4-75d1-458f-96a2-18760ac78989" containerName="util" Dec 10 14:47:45 crc kubenswrapper[4727]: E1210 14:47:45.925645 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163d48a4-75d1-458f-96a2-18760ac78989" containerName="pull" Dec 10 14:47:45 crc kubenswrapper[4727]: I1210 14:47:45.925720 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="163d48a4-75d1-458f-96a2-18760ac78989" containerName="pull" Dec 10 14:47:45 crc kubenswrapper[4727]: E1210 14:47:45.925796 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163d48a4-75d1-458f-96a2-18760ac78989" containerName="extract" Dec 10 14:47:45 crc kubenswrapper[4727]: I1210 14:47:45.925864 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="163d48a4-75d1-458f-96a2-18760ac78989" containerName="extract" Dec 10 14:47:45 crc kubenswrapper[4727]: I1210 14:47:45.926072 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="163d48a4-75d1-458f-96a2-18760ac78989" containerName="extract" Dec 10 14:47:45 crc kubenswrapper[4727]: I1210 14:47:45.927029 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:45 crc kubenswrapper[4727]: I1210 14:47:45.930759 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Dec 10 14:47:45 crc kubenswrapper[4727]: I1210 14:47:45.930832 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Dec 10 14:47:45 crc kubenswrapper[4727]: I1210 14:47:45.930780 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Dec 10 14:47:45 crc kubenswrapper[4727]: I1210 14:47:45.931445 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-25zdk" Dec 10 14:47:45 crc kubenswrapper[4727]: I1210 14:47:45.932657 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Dec 10 14:47:45 crc kubenswrapper[4727]: I1210 14:47:45.934927 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Dec 10 14:47:45 crc kubenswrapper[4727]: I1210 14:47:45.945244 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp"] Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.105629 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-apiservice-cert\") pod \"loki-operator-controller-manager-654644598b-sl6jp\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.106232 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/faf1ebd4-74be-4049-b5cb-26e049d50e6a-manager-config\") pod \"loki-operator-controller-manager-654644598b-sl6jp\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.106370 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-654644598b-sl6jp\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.106514 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb7x9\" (UniqueName: \"kubernetes.io/projected/faf1ebd4-74be-4049-b5cb-26e049d50e6a-kube-api-access-kb7x9\") pod \"loki-operator-controller-manager-654644598b-sl6jp\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.106661 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-webhook-cert\") pod \"loki-operator-controller-manager-654644598b-sl6jp\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.210930 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb7x9\" (UniqueName: \"kubernetes.io/projected/faf1ebd4-74be-4049-b5cb-26e049d50e6a-kube-api-access-kb7x9\") pod \"loki-operator-controller-manager-654644598b-sl6jp\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.211010 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-webhook-cert\") pod \"loki-operator-controller-manager-654644598b-sl6jp\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.211080 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/faf1ebd4-74be-4049-b5cb-26e049d50e6a-manager-config\") pod \"loki-operator-controller-manager-654644598b-sl6jp\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.211105 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-apiservice-cert\") pod \"loki-operator-controller-manager-654644598b-sl6jp\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.211149 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-654644598b-sl6jp\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.213705 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/faf1ebd4-74be-4049-b5cb-26e049d50e6a-manager-config\") pod \"loki-operator-controller-manager-654644598b-sl6jp\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.231748 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-apiservice-cert\") pod \"loki-operator-controller-manager-654644598b-sl6jp\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.241357 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-654644598b-sl6jp\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.241546 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-webhook-cert\") pod \"loki-operator-controller-manager-654644598b-sl6jp\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.254002 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb7x9\" (UniqueName: \"kubernetes.io/projected/faf1ebd4-74be-4049-b5cb-26e049d50e6a-kube-api-access-kb7x9\") pod \"loki-operator-controller-manager-654644598b-sl6jp\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:46 crc kubenswrapper[4727]: I1210 14:47:46.551458 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:47:47 crc kubenswrapper[4727]: I1210 14:47:47.067459 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp"] Dec 10 14:47:47 crc kubenswrapper[4727]: W1210 14:47:47.070979 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaf1ebd4_74be_4049_b5cb_26e049d50e6a.slice/crio-356950bf26080e66b66617621e641e897f162a5c8b9abe4e3d0003aa8bbd375f WatchSource:0}: Error finding container 356950bf26080e66b66617621e641e897f162a5c8b9abe4e3d0003aa8bbd375f: Status 404 returned error can't find the container with id 356950bf26080e66b66617621e641e897f162a5c8b9abe4e3d0003aa8bbd375f Dec 10 14:47:47 crc kubenswrapper[4727]: I1210 14:47:47.593944 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" event={"ID":"faf1ebd4-74be-4049-b5cb-26e049d50e6a","Type":"ContainerStarted","Data":"356950bf26080e66b66617621e641e897f162a5c8b9abe4e3d0003aa8bbd375f"} Dec 10 14:47:57 crc kubenswrapper[4727]: I1210 14:47:57.664137 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" event={"ID":"faf1ebd4-74be-4049-b5cb-26e049d50e6a","Type":"ContainerStarted","Data":"fcd551065edc865d7cff5d62a80941242ff2d7221823c93105bc5f97b1590008"} Dec 10 14:48:02 crc kubenswrapper[4727]: I1210 14:48:02.668387 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x4s6s"] Dec 10 14:48:02 crc kubenswrapper[4727]: I1210 14:48:02.671032 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x4s6s" Dec 10 14:48:02 crc kubenswrapper[4727]: I1210 14:48:02.685861 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4s6s"] Dec 10 14:48:02 crc kubenswrapper[4727]: I1210 14:48:02.743820 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mp4n\" (UniqueName: \"kubernetes.io/projected/7b969113-91be-460a-b1da-dcd546d469c5-kube-api-access-7mp4n\") pod \"community-operators-x4s6s\" (UID: \"7b969113-91be-460a-b1da-dcd546d469c5\") " pod="openshift-marketplace/community-operators-x4s6s" Dec 10 14:48:02 crc kubenswrapper[4727]: I1210 14:48:02.744141 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b969113-91be-460a-b1da-dcd546d469c5-utilities\") pod \"community-operators-x4s6s\" (UID: \"7b969113-91be-460a-b1da-dcd546d469c5\") " pod="openshift-marketplace/community-operators-x4s6s" Dec 10 14:48:02 crc kubenswrapper[4727]: I1210 14:48:02.744271 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b969113-91be-460a-b1da-dcd546d469c5-catalog-content\") pod \"community-operators-x4s6s\" (UID: \"7b969113-91be-460a-b1da-dcd546d469c5\") " pod="openshift-marketplace/community-operators-x4s6s" Dec 10 14:48:02 crc kubenswrapper[4727]: I1210 14:48:02.845259 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mp4n\" (UniqueName: \"kubernetes.io/projected/7b969113-91be-460a-b1da-dcd546d469c5-kube-api-access-7mp4n\") pod \"community-operators-x4s6s\" (UID: \"7b969113-91be-460a-b1da-dcd546d469c5\") " pod="openshift-marketplace/community-operators-x4s6s" Dec 10 14:48:02 crc kubenswrapper[4727]: I1210 14:48:02.845295 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b969113-91be-460a-b1da-dcd546d469c5-utilities\") pod \"community-operators-x4s6s\" (UID: \"7b969113-91be-460a-b1da-dcd546d469c5\") " pod="openshift-marketplace/community-operators-x4s6s" Dec 10 14:48:02 crc kubenswrapper[4727]: I1210 14:48:02.845357 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b969113-91be-460a-b1da-dcd546d469c5-catalog-content\") pod \"community-operators-x4s6s\" (UID: \"7b969113-91be-460a-b1da-dcd546d469c5\") " pod="openshift-marketplace/community-operators-x4s6s" Dec 10 14:48:02 crc kubenswrapper[4727]: I1210 14:48:02.845758 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b969113-91be-460a-b1da-dcd546d469c5-catalog-content\") pod \"community-operators-x4s6s\" (UID: \"7b969113-91be-460a-b1da-dcd546d469c5\") " pod="openshift-marketplace/community-operators-x4s6s" Dec 10 14:48:02 crc kubenswrapper[4727]: I1210 14:48:02.846346 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b969113-91be-460a-b1da-dcd546d469c5-utilities\") pod \"community-operators-x4s6s\" (UID: \"7b969113-91be-460a-b1da-dcd546d469c5\") " pod="openshift-marketplace/community-operators-x4s6s" Dec 10 14:48:02 crc kubenswrapper[4727]: I1210 14:48:02.869557 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7mp4n\" (UniqueName: \"kubernetes.io/projected/7b969113-91be-460a-b1da-dcd546d469c5-kube-api-access-7mp4n\") pod \"community-operators-x4s6s\" (UID: \"7b969113-91be-460a-b1da-dcd546d469c5\") " pod="openshift-marketplace/community-operators-x4s6s" Dec 10 14:48:02 crc kubenswrapper[4727]: I1210 14:48:02.988324 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4s6s" Dec 10 14:48:08 crc kubenswrapper[4727]: I1210 14:48:08.812471 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4s6s"] Dec 10 14:48:09 crc kubenswrapper[4727]: I1210 14:48:09.765241 4727 generic.go:334] "Generic (PLEG): container finished" podID="7b969113-91be-460a-b1da-dcd546d469c5" containerID="638ef69c76eb5a63c454d723083f7532b5ea3eaa288345644ccfe4a133217df0" exitCode=0 Dec 10 14:48:09 crc kubenswrapper[4727]: I1210 14:48:09.765612 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4s6s" event={"ID":"7b969113-91be-460a-b1da-dcd546d469c5","Type":"ContainerDied","Data":"638ef69c76eb5a63c454d723083f7532b5ea3eaa288345644ccfe4a133217df0"} Dec 10 14:48:09 crc kubenswrapper[4727]: I1210 14:48:09.765645 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4s6s" event={"ID":"7b969113-91be-460a-b1da-dcd546d469c5","Type":"ContainerStarted","Data":"7de8956c95b2a8821fdb4b01ab903c40d0f93ef0f09d3e30152788552e4f2c40"} Dec 10 14:48:09 crc kubenswrapper[4727]: I1210 14:48:09.771586 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" event={"ID":"faf1ebd4-74be-4049-b5cb-26e049d50e6a","Type":"ContainerStarted","Data":"a43465958679d0bd79aa6777f6d062c864e4b39d7869ed450c2cb6e8410d46fe"} Dec 10 14:48:09 crc kubenswrapper[4727]: I1210 14:48:09.773503 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:48:09 crc kubenswrapper[4727]: I1210 14:48:09.773792 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" Dec 10 14:48:18 crc kubenswrapper[4727]: I1210 14:48:18.831461 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4s6s" event={"ID":"7b969113-91be-460a-b1da-dcd546d469c5","Type":"ContainerStarted","Data":"b70da0e5dd6184e7234b718cd23f71c0426543e99f4f4fa6ff85eb97df64214d"} Dec 10 14:48:18 crc kubenswrapper[4727]: I1210 14:48:18.860719 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" podStartSLOduration=12.084264165 podStartE2EDuration="33.860693744s" podCreationTimestamp="2025-12-10 14:47:45 +0000 UTC" firstStartedPulling="2025-12-10 14:47:47.074036915 +0000 UTC m=+971.268811457" lastFinishedPulling="2025-12-10 14:48:08.850466494 +0000 UTC m=+993.045241036" observedRunningTime="2025-12-10 14:48:09.817454089 +0000 UTC m=+994.012228631" watchObservedRunningTime="2025-12-10 14:48:18.860693744 +0000 UTC m=+1003.055468306" Dec 10 14:48:19 crc kubenswrapper[4727]: E1210 14:48:19.648573 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b969113_91be_460a_b1da_dcd546d469c5.slice/crio-b70da0e5dd6184e7234b718cd23f71c0426543e99f4f4fa6ff85eb97df64214d.scope\": RecentStats: unable to find data in memory cache]" Dec 10 14:48:19 crc kubenswrapper[4727]: I1210 14:48:19.840515 4727 generic.go:334] "Generic (PLEG): container finished" podID="7b969113-91be-460a-b1da-dcd546d469c5" containerID="b70da0e5dd6184e7234b718cd23f71c0426543e99f4f4fa6ff85eb97df64214d" exitCode=0 Dec 10 14:48:19 crc kubenswrapper[4727]: I1210 14:48:19.840647 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4s6s" event={"ID":"7b969113-91be-460a-b1da-dcd546d469c5","Type":"ContainerDied","Data":"b70da0e5dd6184e7234b718cd23f71c0426543e99f4f4fa6ff85eb97df64214d"} Dec 10 14:48:21 crc kubenswrapper[4727]: I1210 14:48:21.860561 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4s6s" event={"ID":"7b969113-91be-460a-b1da-dcd546d469c5","Type":"ContainerStarted","Data":"d1a095a37555664160b6a8e73a28ee32f58dbb563135e12a6d3bb6533c70c5aa"} Dec 10 14:48:21 crc kubenswrapper[4727]: I1210 14:48:21.883789 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x4s6s" podStartSLOduration=8.325535005 podStartE2EDuration="19.883725429s" podCreationTimestamp="2025-12-10 14:48:02 +0000 UTC" firstStartedPulling="2025-12-10 14:48:09.76733718 +0000 UTC m=+993.962111722" lastFinishedPulling="2025-12-10 14:48:21.325527604 +0000 UTC m=+1005.520302146" observedRunningTime="2025-12-10 14:48:21.882358254 +0000 UTC m=+1006.077132786" watchObservedRunningTime="2025-12-10 14:48:21.883725429 +0000 UTC m=+1006.078499971" Dec 10 14:48:22 crc kubenswrapper[4727]: I1210 14:48:22.990007 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x4s6s" Dec 10 14:48:22 crc kubenswrapper[4727]: I1210 14:48:22.990588 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x4s6s" Dec 10 14:48:24 crc kubenswrapper[4727]: I1210 14:48:24.225394 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-x4s6s" podUID="7b969113-91be-460a-b1da-dcd546d469c5" containerName="registry-server" probeResult="failure" output=< Dec 10 14:48:24 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Dec 10 14:48:24 crc kubenswrapper[4727]: > Dec 10 14:48:33 crc kubenswrapper[4727]: I1210 14:48:33.050808 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x4s6s" Dec 10 14:48:33 crc kubenswrapper[4727]: I1210 14:48:33.101638 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x4s6s" Dec 10 14:48:33 crc kubenswrapper[4727]: I1210 14:48:33.678089 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4s6s"] Dec 10 14:48:33 crc kubenswrapper[4727]: I1210 14:48:33.851800 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bf29c"] Dec 10 14:48:33 crc kubenswrapper[4727]: I1210 14:48:33.852282 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bf29c" podUID="da553bbf-7e26-45f1-80d9-aed40900c3e4" 
containerName="registry-server" containerID="cri-o://810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8" gracePeriod=2 Dec 10 14:48:33 crc kubenswrapper[4727]: E1210 14:48:33.988055 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8 is running failed: container process not found" containerID="810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:48:33 crc kubenswrapper[4727]: E1210 14:48:33.989193 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8 is running failed: container process not found" containerID="810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:48:33 crc kubenswrapper[4727]: E1210 14:48:33.991190 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8 is running failed: container process not found" containerID="810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:48:33 crc kubenswrapper[4727]: E1210 14:48:33.991246 4727 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-bf29c" podUID="da553bbf-7e26-45f1-80d9-aed40900c3e4" containerName="registry-server" Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.717076 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.839251 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mqw7\" (UniqueName: \"kubernetes.io/projected/da553bbf-7e26-45f1-80d9-aed40900c3e4-kube-api-access-4mqw7\") pod \"da553bbf-7e26-45f1-80d9-aed40900c3e4\" (UID: \"da553bbf-7e26-45f1-80d9-aed40900c3e4\") " Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.839322 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da553bbf-7e26-45f1-80d9-aed40900c3e4-catalog-content\") pod \"da553bbf-7e26-45f1-80d9-aed40900c3e4\" (UID: \"da553bbf-7e26-45f1-80d9-aed40900c3e4\") " Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.839362 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da553bbf-7e26-45f1-80d9-aed40900c3e4-utilities\") pod \"da553bbf-7e26-45f1-80d9-aed40900c3e4\" (UID: \"da553bbf-7e26-45f1-80d9-aed40900c3e4\") " Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.840345 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da553bbf-7e26-45f1-80d9-aed40900c3e4-utilities" (OuterVolumeSpecName: "utilities") pod "da553bbf-7e26-45f1-80d9-aed40900c3e4" (UID: "da553bbf-7e26-45f1-80d9-aed40900c3e4"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.844512 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da553bbf-7e26-45f1-80d9-aed40900c3e4-kube-api-access-4mqw7" (OuterVolumeSpecName: "kube-api-access-4mqw7") pod "da553bbf-7e26-45f1-80d9-aed40900c3e4" (UID: "da553bbf-7e26-45f1-80d9-aed40900c3e4"). InnerVolumeSpecName "kube-api-access-4mqw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.907587 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da553bbf-7e26-45f1-80d9-aed40900c3e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da553bbf-7e26-45f1-80d9-aed40900c3e4" (UID: "da553bbf-7e26-45f1-80d9-aed40900c3e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.940579 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mqw7\" (UniqueName: \"kubernetes.io/projected/da553bbf-7e26-45f1-80d9-aed40900c3e4-kube-api-access-4mqw7\") on node \"crc\" DevicePath \"\"" Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.940620 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da553bbf-7e26-45f1-80d9-aed40900c3e4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.940632 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da553bbf-7e26-45f1-80d9-aed40900c3e4-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.949267 4727 generic.go:334] "Generic (PLEG): container finished" podID="da553bbf-7e26-45f1-80d9-aed40900c3e4" containerID="810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8" exitCode=0 Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.949386 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bf29c" Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.949368 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf29c" event={"ID":"da553bbf-7e26-45f1-80d9-aed40900c3e4","Type":"ContainerDied","Data":"810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8"} Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.949534 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf29c" event={"ID":"da553bbf-7e26-45f1-80d9-aed40900c3e4","Type":"ContainerDied","Data":"a0db79db3c3f5337b48229d880b0fb2deb438ffa7af6d4cff2e40be39922ecea"} Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.949586 4727 scope.go:117] "RemoveContainer" containerID="810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8" Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.968781 4727 scope.go:117] "RemoveContainer" containerID="de90ec23919727d1dc170cbb17db5a2aabd5336f76bd26faad94f588ab68d663" Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.981872 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bf29c"] Dec 10 14:48:34 crc kubenswrapper[4727]: I1210 14:48:34.990929 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bf29c"] Dec 10 14:48:35 crc kubenswrapper[4727]: I1210 14:48:35.002506 4727 scope.go:117] "RemoveContainer" containerID="7d5d98883642eed90cdf5b886f74097ab48fda113f59e290651ba726ce200dbe" Dec 10 14:48:35 crc kubenswrapper[4727]: I1210 14:48:35.020808 4727 scope.go:117] "RemoveContainer" containerID="810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8" Dec 10 14:48:35 crc kubenswrapper[4727]: E1210 14:48:35.021534 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8\": container with ID starting with 810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8 not found: ID does not exist" containerID="810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8" Dec 10 14:48:35 crc kubenswrapper[4727]: I1210 14:48:35.022007 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8"} err="failed to get container status \"810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8\": rpc error: code = NotFound desc = could not find container \"810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8\": container with ID starting with 810e456f64973b9b8afae6395e812eec6c3aa33ebbc9e801164e4ddd5c3d36d8 not found: ID does not exist" Dec 10 14:48:35 crc kubenswrapper[4727]: I1210 14:48:35.022168 4727 scope.go:117] "RemoveContainer" containerID="de90ec23919727d1dc170cbb17db5a2aabd5336f76bd26faad94f588ab68d663" Dec 10 14:48:35 crc kubenswrapper[4727]: E1210 14:48:35.022625 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de90ec23919727d1dc170cbb17db5a2aabd5336f76bd26faad94f588ab68d663\": container with ID starting with de90ec23919727d1dc170cbb17db5a2aabd5336f76bd26faad94f588ab68d663 not found: ID does not exist" containerID="de90ec23919727d1dc170cbb17db5a2aabd5336f76bd26faad94f588ab68d663" Dec 10 14:48:35 crc kubenswrapper[4727]: I1210 14:48:35.022680 4727 
Dec 10 14:48:35 crc kubenswrapper[4727]: I1210 14:48:35.022716 4727 scope.go:117] "RemoveContainer" containerID="7d5d98883642eed90cdf5b886f74097ab48fda113f59e290651ba726ce200dbe"
Dec 10 14:48:35 crc kubenswrapper[4727]: E1210 14:48:35.023275 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d5d98883642eed90cdf5b886f74097ab48fda113f59e290651ba726ce200dbe\": container with ID starting with 7d5d98883642eed90cdf5b886f74097ab48fda113f59e290651ba726ce200dbe not found: ID does not exist" containerID="7d5d98883642eed90cdf5b886f74097ab48fda113f59e290651ba726ce200dbe"
Dec 10 14:48:35 crc kubenswrapper[4727]: I1210 14:48:35.023303 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5d98883642eed90cdf5b886f74097ab48fda113f59e290651ba726ce200dbe"} err="failed to get container status \"7d5d98883642eed90cdf5b886f74097ab48fda113f59e290651ba726ce200dbe\": rpc error: code = NotFound desc = could not find container \"7d5d98883642eed90cdf5b886f74097ab48fda113f59e290651ba726ce200dbe\": container with ID starting with 7d5d98883642eed90cdf5b886f74097ab48fda113f59e290651ba726ce200dbe not found: ID does not exist"
Dec 10 14:48:36 crc kubenswrapper[4727]: I1210 14:48:36.573115 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da553bbf-7e26-45f1-80d9-aed40900c3e4" path="/var/lib/kubelet/pods/da553bbf-7e26-45f1-80d9-aed40900c3e4/volumes"
Dec 10 14:48:37 crc kubenswrapper[4727]: I1210 14:48:37.723544 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 14:48:37 crc kubenswrapper[4727]: I1210 14:48:37.724013 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.704432 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz"]
Dec 10 14:48:38 crc kubenswrapper[4727]: E1210 14:48:38.705092 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da553bbf-7e26-45f1-80d9-aed40900c3e4" containerName="extract-content"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.705116 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="da553bbf-7e26-45f1-80d9-aed40900c3e4" containerName="extract-content"
Dec 10 14:48:38 crc kubenswrapper[4727]: E1210 14:48:38.705136 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da553bbf-7e26-45f1-80d9-aed40900c3e4" containerName="extract-utilities"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.705142 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="da553bbf-7e26-45f1-80d9-aed40900c3e4" containerName="extract-utilities"
Dec 10 14:48:38 crc kubenswrapper[4727]: E1210 14:48:38.705156 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da553bbf-7e26-45f1-80d9-aed40900c3e4" containerName="registry-server"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.705162 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="da553bbf-7e26-45f1-80d9-aed40900c3e4" containerName="registry-server"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.705280 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="da553bbf-7e26-45f1-80d9-aed40900c3e4" containerName="registry-server"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.706280 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.708534 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.748909 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz"]
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.810087 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f54db9e5-372c-4dda-a4f8-2802f691871c-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz\" (UID: \"f54db9e5-372c-4dda-a4f8-2802f691871c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.810151 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkm7k\" (UniqueName: \"kubernetes.io/projected/f54db9e5-372c-4dda-a4f8-2802f691871c-kube-api-access-bkm7k\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz\" (UID: \"f54db9e5-372c-4dda-a4f8-2802f691871c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.810206 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f54db9e5-372c-4dda-a4f8-2802f691871c-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz\" (UID: \"f54db9e5-372c-4dda-a4f8-2802f691871c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.911540 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f54db9e5-372c-4dda-a4f8-2802f691871c-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz\" (UID: \"f54db9e5-372c-4dda-a4f8-2802f691871c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.911648 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f54db9e5-372c-4dda-a4f8-2802f691871c-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz\" (UID: \"f54db9e5-372c-4dda-a4f8-2802f691871c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.911678 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkm7k\" (UniqueName: \"kubernetes.io/projected/f54db9e5-372c-4dda-a4f8-2802f691871c-kube-api-access-bkm7k\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz\" (UID: \"f54db9e5-372c-4dda-a4f8-2802f691871c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.912372 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f54db9e5-372c-4dda-a4f8-2802f691871c-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz\" (UID: \"f54db9e5-372c-4dda-a4f8-2802f691871c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.912767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f54db9e5-372c-4dda-a4f8-2802f691871c-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz\" (UID: \"f54db9e5-372c-4dda-a4f8-2802f691871c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz"
Dec 10 14:48:38 crc kubenswrapper[4727]: I1210 14:48:38.936614 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkm7k\" (UniqueName: \"kubernetes.io/projected/f54db9e5-372c-4dda-a4f8-2802f691871c-kube-api-access-bkm7k\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz\" (UID: \"f54db9e5-372c-4dda-a4f8-2802f691871c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz"
Dec 10 14:48:39 crc kubenswrapper[4727]: I1210 14:48:39.050501 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz"
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz" Dec 10 14:48:39 crc kubenswrapper[4727]: I1210 14:48:39.391250 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz"] Dec 10 14:48:39 crc kubenswrapper[4727]: I1210 14:48:39.987992 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz" event={"ID":"f54db9e5-372c-4dda-a4f8-2802f691871c","Type":"ContainerStarted","Data":"022de5a5b0df7966b6f7ec56e65048e640fa6c8c37ccb91c2392447e92e3c01d"} Dec 10 14:48:41 crc kubenswrapper[4727]: I1210 14:48:41.000252 4727 generic.go:334] "Generic (PLEG): container finished" podID="f54db9e5-372c-4dda-a4f8-2802f691871c" containerID="a3003b6018858418fdcb26e52d4ac9433f53a06edfbbdd35e3006acee5725cf6" exitCode=0 Dec 10 14:48:41 crc kubenswrapper[4727]: I1210 14:48:41.000305 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz" event={"ID":"f54db9e5-372c-4dda-a4f8-2802f691871c","Type":"ContainerDied","Data":"a3003b6018858418fdcb26e52d4ac9433f53a06edfbbdd35e3006acee5725cf6"} Dec 10 14:48:43 crc kubenswrapper[4727]: I1210 14:48:43.016064 4727 generic.go:334] "Generic (PLEG): container finished" podID="f54db9e5-372c-4dda-a4f8-2802f691871c" containerID="cc0d74f6d64502070c5b37671cadcdfb844f7519fda918afc9203a94809525b4" exitCode=0 Dec 10 14:48:43 crc kubenswrapper[4727]: I1210 14:48:43.016157 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz" event={"ID":"f54db9e5-372c-4dda-a4f8-2802f691871c","Type":"ContainerDied","Data":"cc0d74f6d64502070c5b37671cadcdfb844f7519fda918afc9203a94809525b4"} Dec 10 14:48:43 crc kubenswrapper[4727]: I1210 14:48:43.864593 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-42f5w"] Dec 10 14:48:43 crc kubenswrapper[4727]: I1210 14:48:43.866398 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:48:43 crc kubenswrapper[4727]: I1210 14:48:43.881347 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-42f5w"] Dec 10 14:48:43 crc kubenswrapper[4727]: I1210 14:48:43.928686 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812e46f9-c13d-487e-be94-e858193c2369-utilities\") pod \"certified-operators-42f5w\" (UID: \"812e46f9-c13d-487e-be94-e858193c2369\") " pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:48:43 crc kubenswrapper[4727]: I1210 14:48:43.928749 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pp7c\" (UniqueName: \"kubernetes.io/projected/812e46f9-c13d-487e-be94-e858193c2369-kube-api-access-8pp7c\") pod \"certified-operators-42f5w\" (UID: \"812e46f9-c13d-487e-be94-e858193c2369\") " pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:48:43 crc kubenswrapper[4727]: I1210 14:48:43.928880 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812e46f9-c13d-487e-be94-e858193c2369-catalog-content\") pod \"certified-operators-42f5w\" (UID: \"812e46f9-c13d-487e-be94-e858193c2369\") " pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:48:44 crc kubenswrapper[4727]: I1210 14:48:44.024802 4727 generic.go:334] "Generic (PLEG): container finished" podID="f54db9e5-372c-4dda-a4f8-2802f691871c" containerID="047f5872ba93b2a2bd732d9b5e1f8a1204edb21098baeb78aa9c794703c21f87" exitCode=0 Dec 10 14:48:44 crc kubenswrapper[4727]: I1210 14:48:44.024848 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz" event={"ID":"f54db9e5-372c-4dda-a4f8-2802f691871c","Type":"ContainerDied","Data":"047f5872ba93b2a2bd732d9b5e1f8a1204edb21098baeb78aa9c794703c21f87"} Dec 10 14:48:44 crc kubenswrapper[4727]: I1210 14:48:44.030786 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812e46f9-c13d-487e-be94-e858193c2369-utilities\") pod \"certified-operators-42f5w\" (UID: \"812e46f9-c13d-487e-be94-e858193c2369\") " pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:48:44 crc kubenswrapper[4727]: I1210 14:48:44.030842 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pp7c\" (UniqueName: \"kubernetes.io/projected/812e46f9-c13d-487e-be94-e858193c2369-kube-api-access-8pp7c\") pod \"certified-operators-42f5w\" (UID: \"812e46f9-c13d-487e-be94-e858193c2369\") " pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:48:44 crc kubenswrapper[4727]: I1210 14:48:44.030888 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812e46f9-c13d-487e-be94-e858193c2369-catalog-content\") pod \"certified-operators-42f5w\" (UID: \"812e46f9-c13d-487e-be94-e858193c2369\") " pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:48:44 crc kubenswrapper[4727]: I1210 14:48:44.031450 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812e46f9-c13d-487e-be94-e858193c2369-utilities\") pod 
\"certified-operators-42f5w\" (UID: \"812e46f9-c13d-487e-be94-e858193c2369\") " pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:48:44 crc kubenswrapper[4727]: I1210 14:48:44.031459 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812e46f9-c13d-487e-be94-e858193c2369-catalog-content\") pod \"certified-operators-42f5w\" (UID: \"812e46f9-c13d-487e-be94-e858193c2369\") " pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:48:44 crc kubenswrapper[4727]: I1210 14:48:44.052842 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pp7c\" (UniqueName: \"kubernetes.io/projected/812e46f9-c13d-487e-be94-e858193c2369-kube-api-access-8pp7c\") pod \"certified-operators-42f5w\" (UID: \"812e46f9-c13d-487e-be94-e858193c2369\") " pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:48:44 crc kubenswrapper[4727]: I1210 14:48:44.182401 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:48:44 crc kubenswrapper[4727]: I1210 14:48:44.846188 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-42f5w"] Dec 10 14:48:45 crc kubenswrapper[4727]: I1210 14:48:45.032724 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42f5w" event={"ID":"812e46f9-c13d-487e-be94-e858193c2369","Type":"ContainerStarted","Data":"1256da40929c1fbb62c9ddf551641ce2a3153366cd8ed9cdfe8f06c4d6251b19"} Dec 10 14:48:45 crc kubenswrapper[4727]: I1210 14:48:45.032774 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42f5w" event={"ID":"812e46f9-c13d-487e-be94-e858193c2369","Type":"ContainerStarted","Data":"e16b2a63ed7da520999c1b6326825461189627934d6efdfb32e5279a4e921922"} Dec 10 14:48:45 crc kubenswrapper[4727]: I1210 14:48:45.274919 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz" Dec 10 14:48:45 crc kubenswrapper[4727]: I1210 14:48:45.451030 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f54db9e5-372c-4dda-a4f8-2802f691871c-bundle\") pod \"f54db9e5-372c-4dda-a4f8-2802f691871c\" (UID: \"f54db9e5-372c-4dda-a4f8-2802f691871c\") " Dec 10 14:48:45 crc kubenswrapper[4727]: I1210 14:48:45.451095 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkm7k\" (UniqueName: \"kubernetes.io/projected/f54db9e5-372c-4dda-a4f8-2802f691871c-kube-api-access-bkm7k\") pod \"f54db9e5-372c-4dda-a4f8-2802f691871c\" (UID: \"f54db9e5-372c-4dda-a4f8-2802f691871c\") " Dec 10 14:48:45 crc kubenswrapper[4727]: I1210 14:48:45.451193 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f54db9e5-372c-4dda-a4f8-2802f691871c-util\") pod \"f54db9e5-372c-4dda-a4f8-2802f691871c\" (UID: \"f54db9e5-372c-4dda-a4f8-2802f691871c\") " Dec 10 14:48:45 crc kubenswrapper[4727]: I1210 14:48:45.452122 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f54db9e5-372c-4dda-a4f8-2802f691871c-bundle" (OuterVolumeSpecName: "bundle") pod "f54db9e5-372c-4dda-a4f8-2802f691871c" (UID: "f54db9e5-372c-4dda-a4f8-2802f691871c"). 
InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:48:45 crc kubenswrapper[4727]: I1210 14:48:45.468542 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f54db9e5-372c-4dda-a4f8-2802f691871c-kube-api-access-bkm7k" (OuterVolumeSpecName: "kube-api-access-bkm7k") pod "f54db9e5-372c-4dda-a4f8-2802f691871c" (UID: "f54db9e5-372c-4dda-a4f8-2802f691871c"). InnerVolumeSpecName "kube-api-access-bkm7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:48:45 crc kubenswrapper[4727]: I1210 14:48:45.470365 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f54db9e5-372c-4dda-a4f8-2802f691871c-util" (OuterVolumeSpecName: "util") pod "f54db9e5-372c-4dda-a4f8-2802f691871c" (UID: "f54db9e5-372c-4dda-a4f8-2802f691871c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:48:45 crc kubenswrapper[4727]: I1210 14:48:45.552876 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f54db9e5-372c-4dda-a4f8-2802f691871c-util\") on node \"crc\" DevicePath \"\"" Dec 10 14:48:45 crc kubenswrapper[4727]: I1210 14:48:45.552944 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f54db9e5-372c-4dda-a4f8-2802f691871c-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:48:45 crc kubenswrapper[4727]: I1210 14:48:45.552956 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkm7k\" (UniqueName: \"kubernetes.io/projected/f54db9e5-372c-4dda-a4f8-2802f691871c-kube-api-access-bkm7k\") on node \"crc\" DevicePath \"\"" Dec 10 14:48:46 crc kubenswrapper[4727]: I1210 14:48:46.041383 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz" Dec 10 14:48:46 crc kubenswrapper[4727]: I1210 14:48:46.041394 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz" event={"ID":"f54db9e5-372c-4dda-a4f8-2802f691871c","Type":"ContainerDied","Data":"022de5a5b0df7966b6f7ec56e65048e640fa6c8c37ccb91c2392447e92e3c01d"} Dec 10 14:48:46 crc kubenswrapper[4727]: I1210 14:48:46.041440 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="022de5a5b0df7966b6f7ec56e65048e640fa6c8c37ccb91c2392447e92e3c01d" Dec 10 14:48:46 crc kubenswrapper[4727]: I1210 14:48:46.042714 4727 generic.go:334] "Generic (PLEG): container finished" podID="812e46f9-c13d-487e-be94-e858193c2369" containerID="1256da40929c1fbb62c9ddf551641ce2a3153366cd8ed9cdfe8f06c4d6251b19" exitCode=0 Dec 10 14:48:46 crc kubenswrapper[4727]: I1210 14:48:46.042766 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42f5w" event={"ID":"812e46f9-c13d-487e-be94-e858193c2369","Type":"ContainerDied","Data":"1256da40929c1fbb62c9ddf551641ce2a3153366cd8ed9cdfe8f06c4d6251b19"} Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.256824 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g8bcw"] Dec 10 14:48:47 crc kubenswrapper[4727]: E1210 14:48:47.257531 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54db9e5-372c-4dda-a4f8-2802f691871c" containerName="pull" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.257549 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54db9e5-372c-4dda-a4f8-2802f691871c" containerName="pull" Dec 10 14:48:47 crc kubenswrapper[4727]: E1210 14:48:47.257559 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54db9e5-372c-4dda-a4f8-2802f691871c" containerName="util" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.257565 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54db9e5-372c-4dda-a4f8-2802f691871c" containerName="util" Dec 10 14:48:47 crc kubenswrapper[4727]: E1210 14:48:47.257586 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54db9e5-372c-4dda-a4f8-2802f691871c" containerName="extract" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.257591 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54db9e5-372c-4dda-a4f8-2802f691871c" containerName="extract" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.257688 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f54db9e5-372c-4dda-a4f8-2802f691871c" containerName="extract" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.258489 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.275607 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8bcw"] Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.275847 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skpzt\" (UniqueName: \"kubernetes.io/projected/15b69fa6-852e-4b4a-ac66-08025abf2a8f-kube-api-access-skpzt\") pod \"redhat-marketplace-g8bcw\" (UID: \"15b69fa6-852e-4b4a-ac66-08025abf2a8f\") " pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.275898 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15b69fa6-852e-4b4a-ac66-08025abf2a8f-utilities\") pod \"redhat-marketplace-g8bcw\" (UID: \"15b69fa6-852e-4b4a-ac66-08025abf2a8f\") " pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.276147 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15b69fa6-852e-4b4a-ac66-08025abf2a8f-catalog-content\") pod \"redhat-marketplace-g8bcw\" (UID: \"15b69fa6-852e-4b4a-ac66-08025abf2a8f\") " pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.377341 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15b69fa6-852e-4b4a-ac66-08025abf2a8f-catalog-content\") pod \"redhat-marketplace-g8bcw\" (UID: \"15b69fa6-852e-4b4a-ac66-08025abf2a8f\") " pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.377402 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skpzt\" (UniqueName: \"kubernetes.io/projected/15b69fa6-852e-4b4a-ac66-08025abf2a8f-kube-api-access-skpzt\") pod \"redhat-marketplace-g8bcw\" (UID: \"15b69fa6-852e-4b4a-ac66-08025abf2a8f\") " pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.377427 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15b69fa6-852e-4b4a-ac66-08025abf2a8f-utilities\") pod \"redhat-marketplace-g8bcw\" (UID: \"15b69fa6-852e-4b4a-ac66-08025abf2a8f\") " pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.377885 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15b69fa6-852e-4b4a-ac66-08025abf2a8f-catalog-content\") pod \"redhat-marketplace-g8bcw\" (UID: \"15b69fa6-852e-4b4a-ac66-08025abf2a8f\") " pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.378065 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15b69fa6-852e-4b4a-ac66-08025abf2a8f-utilities\") pod \"redhat-marketplace-g8bcw\" (UID: \"15b69fa6-852e-4b4a-ac66-08025abf2a8f\") " pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.402947 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-skpzt\" (UniqueName: \"kubernetes.io/projected/15b69fa6-852e-4b4a-ac66-08025abf2a8f-kube-api-access-skpzt\") pod \"redhat-marketplace-g8bcw\" (UID: \"15b69fa6-852e-4b4a-ac66-08025abf2a8f\") " pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.575776 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:48:47 crc kubenswrapper[4727]: I1210 14:48:47.858316 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8bcw"] Dec 10 14:48:48 crc kubenswrapper[4727]: I1210 14:48:48.056121 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42f5w" event={"ID":"812e46f9-c13d-487e-be94-e858193c2369","Type":"ContainerStarted","Data":"ac0af4a5565aeb58e33b09770a688f43cbc66a6d454b7ba1380757272eee71a1"} Dec 10 14:48:48 crc kubenswrapper[4727]: I1210 14:48:48.057888 4727 generic.go:334] "Generic (PLEG): container finished" podID="15b69fa6-852e-4b4a-ac66-08025abf2a8f" containerID="f304203bac146184686fac4fc84d396da2584de34414c991dc26bac2ab2a225d" exitCode=0 Dec 10 14:48:48 crc kubenswrapper[4727]: I1210 14:48:48.057963 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8bcw" event={"ID":"15b69fa6-852e-4b4a-ac66-08025abf2a8f","Type":"ContainerDied","Data":"f304203bac146184686fac4fc84d396da2584de34414c991dc26bac2ab2a225d"} Dec 10 14:48:48 crc kubenswrapper[4727]: I1210 14:48:48.057989 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8bcw" event={"ID":"15b69fa6-852e-4b4a-ac66-08025abf2a8f","Type":"ContainerStarted","Data":"b78dc0b52f7541c0adbf69b78a4470bd0a79fcc4b2aa546659631b659c0d5a73"} Dec 10 14:48:49 crc kubenswrapper[4727]: I1210 14:48:49.067776 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8bcw" event={"ID":"15b69fa6-852e-4b4a-ac66-08025abf2a8f","Type":"ContainerStarted","Data":"2dc12eed37f9fa50e075bdd89498e93f566a7a0fff7edb349a7c71e57b69b5bf"} Dec 10 14:48:49 crc kubenswrapper[4727]: I1210 14:48:49.070312 4727 generic.go:334] "Generic (PLEG): container finished" podID="812e46f9-c13d-487e-be94-e858193c2369" containerID="ac0af4a5565aeb58e33b09770a688f43cbc66a6d454b7ba1380757272eee71a1" exitCode=0 Dec 10 14:48:49 crc kubenswrapper[4727]: I1210 14:48:49.070465 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42f5w" event={"ID":"812e46f9-c13d-487e-be94-e858193c2369","Type":"ContainerDied","Data":"ac0af4a5565aeb58e33b09770a688f43cbc66a6d454b7ba1380757272eee71a1"} Dec 10 14:48:50 crc kubenswrapper[4727]: I1210 14:48:50.025192 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-9g8g6"] Dec 10 14:48:50 crc kubenswrapper[4727]: I1210 14:48:50.026496 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-9g8g6" Dec 10 14:48:50 crc kubenswrapper[4727]: I1210 14:48:50.028500 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 10 14:48:50 crc kubenswrapper[4727]: I1210 14:48:50.028926 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-lbpbj" Dec 10 14:48:50 crc kubenswrapper[4727]: I1210 14:48:50.029045 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 10 14:48:50 crc kubenswrapper[4727]: I1210 14:48:50.041595 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-9g8g6"] Dec 10 14:48:50 crc kubenswrapper[4727]: I1210 14:48:50.078578 4727 generic.go:334] "Generic (PLEG): container finished" podID="15b69fa6-852e-4b4a-ac66-08025abf2a8f" containerID="2dc12eed37f9fa50e075bdd89498e93f566a7a0fff7edb349a7c71e57b69b5bf" exitCode=0 Dec 10 14:48:50 crc kubenswrapper[4727]: I1210 14:48:50.078638 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8bcw" event={"ID":"15b69fa6-852e-4b4a-ac66-08025abf2a8f","Type":"ContainerDied","Data":"2dc12eed37f9fa50e075bdd89498e93f566a7a0fff7edb349a7c71e57b69b5bf"} Dec 10 14:48:50 crc kubenswrapper[4727]: I1210 14:48:50.190085 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmcz7\" (UniqueName: \"kubernetes.io/projected/de820475-3267-47cf-8db9-e6294484117c-kube-api-access-lmcz7\") pod \"nmstate-operator-5b5b58f5c8-9g8g6\" (UID: \"de820475-3267-47cf-8db9-e6294484117c\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-9g8g6" Dec 10 14:48:50 crc kubenswrapper[4727]: I1210 14:48:50.291667 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmcz7\" (UniqueName: \"kubernetes.io/projected/de820475-3267-47cf-8db9-e6294484117c-kube-api-access-lmcz7\") pod \"nmstate-operator-5b5b58f5c8-9g8g6\" (UID: \"de820475-3267-47cf-8db9-e6294484117c\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-9g8g6" Dec 10 14:48:50 crc kubenswrapper[4727]: I1210 14:48:50.314021 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmcz7\" (UniqueName: \"kubernetes.io/projected/de820475-3267-47cf-8db9-e6294484117c-kube-api-access-lmcz7\") pod \"nmstate-operator-5b5b58f5c8-9g8g6\" (UID: \"de820475-3267-47cf-8db9-e6294484117c\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-9g8g6" Dec 10 14:48:50 crc kubenswrapper[4727]: I1210 14:48:50.345355 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-9g8g6" Dec 10 14:48:51 crc kubenswrapper[4727]: I1210 14:48:51.024461 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-9g8g6"] Dec 10 14:48:51 crc kubenswrapper[4727]: W1210 14:48:51.081716 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde820475_3267_47cf_8db9_e6294484117c.slice/crio-39f7d909d002696effeff7cecd776c086d8f6410eff80a7677569f9c35323f65 WatchSource:0}: Error finding container 39f7d909d002696effeff7cecd776c086d8f6410eff80a7677569f9c35323f65: Status 404 returned error can't find the container with id 39f7d909d002696effeff7cecd776c086d8f6410eff80a7677569f9c35323f65 Dec 10 14:48:52 crc kubenswrapper[4727]: I1210 14:48:52.091176 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-9g8g6" event={"ID":"de820475-3267-47cf-8db9-e6294484117c","Type":"ContainerStarted","Data":"39f7d909d002696effeff7cecd776c086d8f6410eff80a7677569f9c35323f65"} Dec 10 14:48:53 crc kubenswrapper[4727]: I1210 14:48:53.098563 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42f5w" event={"ID":"812e46f9-c13d-487e-be94-e858193c2369","Type":"ContainerStarted","Data":"a10cf921f8792ba85700804b077c8610e4aedace308904a83a29c88cd2478391"} Dec 10 14:48:53 crc kubenswrapper[4727]: I1210 14:48:53.101729 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8bcw" event={"ID":"15b69fa6-852e-4b4a-ac66-08025abf2a8f","Type":"ContainerStarted","Data":"ab154812ddadd7cc9be5b040d8023c304c5ccd37877e62206a8d8b05f7aee2c0"} Dec 10 14:48:53 crc kubenswrapper[4727]: I1210 14:48:53.165033 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-42f5w" podStartSLOduration=3.896602314 podStartE2EDuration="10.16497952s" podCreationTimestamp="2025-12-10 14:48:43 +0000 UTC" firstStartedPulling="2025-12-10 14:48:46.044292041 +0000 UTC m=+1030.239066583" lastFinishedPulling="2025-12-10 14:48:52.312669247 +0000 UTC m=+1036.507443789" observedRunningTime="2025-12-10 14:48:53.161473741 +0000 UTC m=+1037.356248293" watchObservedRunningTime="2025-12-10 14:48:53.16497952 +0000 UTC m=+1037.359754072" Dec 10 14:48:54 crc kubenswrapper[4727]: I1210 14:48:54.128793 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g8bcw" podStartSLOduration=2.658897423 podStartE2EDuration="7.12877577s" podCreationTimestamp="2025-12-10 14:48:47 +0000 UTC" firstStartedPulling="2025-12-10 14:48:48.059562246 +0000 UTC m=+1032.254336778" lastFinishedPulling="2025-12-10 14:48:52.529440583 +0000 UTC m=+1036.724215125" observedRunningTime="2025-12-10 14:48:54.124496872 +0000 UTC m=+1038.319271414" watchObservedRunningTime="2025-12-10 14:48:54.12877577 +0000 UTC m=+1038.323550312" Dec 10 14:48:54 crc kubenswrapper[4727]: I1210 14:48:54.184126 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:48:54 crc kubenswrapper[4727]: I1210 14:48:54.184254 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:48:55 crc kubenswrapper[4727]: I1210 14:48:55.116808 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-9g8g6" event={"ID":"de820475-3267-47cf-8db9-e6294484117c","Type":"ContainerStarted","Data":"e236cab635f41b548b5795bb5c993661b423ebe3eef05e5ca7c4b016609c4364"} Dec 10 14:48:55 crc kubenswrapper[4727]: I1210 14:48:55.147150 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-9g8g6" podStartSLOduration=1.35258297 podStartE2EDuration="5.147129667s" podCreationTimestamp="2025-12-10 14:48:50 +0000 UTC" firstStartedPulling="2025-12-10 14:48:51.086800626 +0000 UTC m=+1035.281575168" lastFinishedPulling="2025-12-10 14:48:54.881347323 +0000 UTC m=+1039.076121865" observedRunningTime="2025-12-10 14:48:55.141091585 +0000 UTC m=+1039.335866127" watchObservedRunningTime="2025-12-10 14:48:55.147129667 +0000 UTC m=+1039.341904199" Dec 10 14:48:55 crc kubenswrapper[4727]: I1210 14:48:55.236865 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-42f5w" podUID="812e46f9-c13d-487e-be94-e858193c2369" containerName="registry-server" probeResult="failure" output=< Dec 10 14:48:55 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Dec 10 14:48:55 crc kubenswrapper[4727]: > Dec 10 14:48:57 crc kubenswrapper[4727]: I1210 14:48:57.576689 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:48:57 crc kubenswrapper[4727]: I1210 14:48:57.576744 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:48:57 crc kubenswrapper[4727]: I1210 14:48:57.616624 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:48:58 crc kubenswrapper[4727]: I1210 14:48:58.179761 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.333208 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-6ttgs"] Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.334577 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-6ttgs" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.336127 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-lh7gv" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.352483 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-6ttgs"] Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.363000 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q"] Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.363987 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.366828 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.400930 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q"] Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.413463 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-sbrdr"] Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.414506 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.453967 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhtzx\" (UniqueName: \"kubernetes.io/projected/bee6b7dc-2e3a-4ae1-bb4b-27a411edee96-kube-api-access-fhtzx\") pod \"nmstate-webhook-5f6d4c5ccb-x999q\" (UID: \"bee6b7dc-2e3a-4ae1-bb4b-27a411edee96\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.454330 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bee6b7dc-2e3a-4ae1-bb4b-27a411edee96-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-x999q\" (UID: \"bee6b7dc-2e3a-4ae1-bb4b-27a411edee96\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.454719 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwnvm\" (UniqueName: \"kubernetes.io/projected/40f66a34-f6b1-472c-ae2b-494c4ccb8735-kube-api-access-cwnvm\") pod \"nmstate-metrics-7f946cbc9-6ttgs\" (UID: \"40f66a34-f6b1-472c-ae2b-494c4ccb8735\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-6ttgs" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.501431 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl"] Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.502287 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.504956 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hfl58" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.510311 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.510310 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.516770 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl"] Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.556355 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cd926915-18e8-430d-8329-a56205b43546-ovs-socket\") pod \"nmstate-handler-sbrdr\" (UID: \"cd926915-18e8-430d-8329-a56205b43546\") " pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.556434 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cd926915-18e8-430d-8329-a56205b43546-nmstate-lock\") pod \"nmstate-handler-sbrdr\" (UID: \"cd926915-18e8-430d-8329-a56205b43546\") " pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.556470 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bee6b7dc-2e3a-4ae1-bb4b-27a411edee96-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-x999q\" (UID: \"bee6b7dc-2e3a-4ae1-bb4b-27a411edee96\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.556490 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwnvm\" (UniqueName: \"kubernetes.io/projected/40f66a34-f6b1-472c-ae2b-494c4ccb8735-kube-api-access-cwnvm\") pod \"nmstate-metrics-7f946cbc9-6ttgs\" (UID: \"40f66a34-f6b1-472c-ae2b-494c4ccb8735\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-6ttgs" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.556611 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhtzx\" (UniqueName: \"kubernetes.io/projected/bee6b7dc-2e3a-4ae1-bb4b-27a411edee96-kube-api-access-fhtzx\") pod \"nmstate-webhook-5f6d4c5ccb-x999q\" (UID: \"bee6b7dc-2e3a-4ae1-bb4b-27a411edee96\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.556708 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrrvz\" (UniqueName: \"kubernetes.io/projected/cd926915-18e8-430d-8329-a56205b43546-kube-api-access-nrrvz\") pod \"nmstate-handler-sbrdr\" (UID: \"cd926915-18e8-430d-8329-a56205b43546\") " pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:48:59 crc kubenswrapper[4727]: E1210 14:48:59.556703 4727 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 10 14:48:59 crc kubenswrapper[4727]: E1210 14:48:59.556820 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/bee6b7dc-2e3a-4ae1-bb4b-27a411edee96-tls-key-pair podName:bee6b7dc-2e3a-4ae1-bb4b-27a411edee96 nodeName:}" failed. No retries permitted until 2025-12-10 14:49:00.056760973 +0000 UTC m=+1044.251535595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/bee6b7dc-2e3a-4ae1-bb4b-27a411edee96-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-x999q" (UID: "bee6b7dc-2e3a-4ae1-bb4b-27a411edee96") : secret "openshift-nmstate-webhook" not found Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.556849 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cd926915-18e8-430d-8329-a56205b43546-dbus-socket\") pod \"nmstate-handler-sbrdr\" (UID: \"cd926915-18e8-430d-8329-a56205b43546\") " pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.581185 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwnvm\" (UniqueName: \"kubernetes.io/projected/40f66a34-f6b1-472c-ae2b-494c4ccb8735-kube-api-access-cwnvm\") pod \"nmstate-metrics-7f946cbc9-6ttgs\" (UID: \"40f66a34-f6b1-472c-ae2b-494c4ccb8735\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-6ttgs" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.583457 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhtzx\" (UniqueName: \"kubernetes.io/projected/bee6b7dc-2e3a-4ae1-bb4b-27a411edee96-kube-api-access-fhtzx\") pod \"nmstate-webhook-5f6d4c5ccb-x999q\" (UID: \"bee6b7dc-2e3a-4ae1-bb4b-27a411edee96\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.653825 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-6ttgs" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.658369 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cd926915-18e8-430d-8329-a56205b43546-dbus-socket\") pod \"nmstate-handler-sbrdr\" (UID: \"cd926915-18e8-430d-8329-a56205b43546\") " pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.658420 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cd926915-18e8-430d-8329-a56205b43546-ovs-socket\") pod \"nmstate-handler-sbrdr\" (UID: \"cd926915-18e8-430d-8329-a56205b43546\") " pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.658449 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa45914b-29fb-49a1-8a3b-f29d4f0dedc2-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-kflbl\" (UID: \"fa45914b-29fb-49a1-8a3b-f29d4f0dedc2\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.658475 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptw97\" (UniqueName: \"kubernetes.io/projected/fa45914b-29fb-49a1-8a3b-f29d4f0dedc2-kube-api-access-ptw97\") pod \"nmstate-console-plugin-7fbb5f6569-kflbl\" (UID: \"fa45914b-29fb-49a1-8a3b-f29d4f0dedc2\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.658508 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cd926915-18e8-430d-8329-a56205b43546-nmstate-lock\") pod \"nmstate-handler-sbrdr\" (UID: \"cd926915-18e8-430d-8329-a56205b43546\") " pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.658616 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cd926915-18e8-430d-8329-a56205b43546-nmstate-lock\") pod \"nmstate-handler-sbrdr\" (UID: \"cd926915-18e8-430d-8329-a56205b43546\") " pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.658681 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cd926915-18e8-430d-8329-a56205b43546-ovs-socket\") pod \"nmstate-handler-sbrdr\" (UID: \"cd926915-18e8-430d-8329-a56205b43546\") " pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.658703 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cd926915-18e8-430d-8329-a56205b43546-dbus-socket\") pod \"nmstate-handler-sbrdr\" (UID: \"cd926915-18e8-430d-8329-a56205b43546\") " pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.658862 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrrvz\" (UniqueName: \"kubernetes.io/projected/cd926915-18e8-430d-8329-a56205b43546-kube-api-access-nrrvz\") pod \"nmstate-handler-sbrdr\" (UID: 
\"cd926915-18e8-430d-8329-a56205b43546\") " pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.658960 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fa45914b-29fb-49a1-8a3b-f29d4f0dedc2-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-kflbl\" (UID: \"fa45914b-29fb-49a1-8a3b-f29d4f0dedc2\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.691733 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrrvz\" (UniqueName: \"kubernetes.io/projected/cd926915-18e8-430d-8329-a56205b43546-kube-api-access-nrrvz\") pod \"nmstate-handler-sbrdr\" (UID: \"cd926915-18e8-430d-8329-a56205b43546\") " pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.733482 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.765722 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fa45914b-29fb-49a1-8a3b-f29d4f0dedc2-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-kflbl\" (UID: \"fa45914b-29fb-49a1-8a3b-f29d4f0dedc2\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.765789 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa45914b-29fb-49a1-8a3b-f29d4f0dedc2-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-kflbl\" (UID: \"fa45914b-29fb-49a1-8a3b-f29d4f0dedc2\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.765849 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptw97\" (UniqueName: \"kubernetes.io/projected/fa45914b-29fb-49a1-8a3b-f29d4f0dedc2-kube-api-access-ptw97\") pod \"nmstate-console-plugin-7fbb5f6569-kflbl\" (UID: \"fa45914b-29fb-49a1-8a3b-f29d4f0dedc2\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.767363 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fa45914b-29fb-49a1-8a3b-f29d4f0dedc2-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-kflbl\" (UID: \"fa45914b-29fb-49a1-8a3b-f29d4f0dedc2\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.771059 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa45914b-29fb-49a1-8a3b-f29d4f0dedc2-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-kflbl\" (UID: \"fa45914b-29fb-49a1-8a3b-f29d4f0dedc2\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.792592 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptw97\" (UniqueName: \"kubernetes.io/projected/fa45914b-29fb-49a1-8a3b-f29d4f0dedc2-kube-api-access-ptw97\") pod \"nmstate-console-plugin-7fbb5f6569-kflbl\" (UID: \"fa45914b-29fb-49a1-8a3b-f29d4f0dedc2\") " 
pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.813981 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-774dff957-4j66h"] Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.815062 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-774dff957-4j66h" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.816827 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-774dff957-4j66h"] Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.823801 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.892864 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8bcw"] Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.969007 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-console-serving-cert\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.969314 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbxgn\" (UniqueName: \"kubernetes.io/projected/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-kube-api-access-dbxgn\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.969355 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-oauth-serving-cert\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.969372 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-trusted-ca-bundle\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.969407 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-service-ca\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.969450 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-console-config\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:48:59 crc kubenswrapper[4727]: I1210 14:48:59.969492 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-console-oauth-config\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.087851 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-console-config\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.087947 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-console-oauth-config\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.088010 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bee6b7dc-2e3a-4ae1-bb4b-27a411edee96-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-x999q\" (UID: \"bee6b7dc-2e3a-4ae1-bb4b-27a411edee96\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.088040 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-console-serving-cert\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.088063 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbxgn\" (UniqueName: \"kubernetes.io/projected/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-kube-api-access-dbxgn\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.088101 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-oauth-serving-cert\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.088127 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-trusted-ca-bundle\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.088170 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-service-ca\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.089134 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-service-ca\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.089730 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-console-config\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.095208 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-console-oauth-config\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.095350 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-oauth-serving-cert\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.096339 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-trusted-ca-bundle\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.098532 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-console-serving-cert\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.099681 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bee6b7dc-2e3a-4ae1-bb4b-27a411edee96-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-x999q\" (UID: \"bee6b7dc-2e3a-4ae1-bb4b-27a411edee96\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.127465 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbxgn\" (UniqueName: \"kubernetes.io/projected/c2ff9ff9-604c-4ecc-b0e3-63382f01a59d-kube-api-access-dbxgn\") pod \"console-774dff957-4j66h\" (UID: \"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d\") " pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.158272 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.176833 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sbrdr" event={"ID":"cd926915-18e8-430d-8329-a56205b43546","Type":"ContainerStarted","Data":"31927961f454adc0a0a9756d01b62c610f98cedf3048aa1271d8d5cb950eb9e2"} Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.177030 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g8bcw" podUID="15b69fa6-852e-4b4a-ac66-08025abf2a8f" containerName="registry-server" containerID="cri-o://ab154812ddadd7cc9be5b040d8023c304c5ccd37877e62206a8d8b05f7aee2c0" gracePeriod=2 Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.281655 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q" Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.558002 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-6ttgs"] Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.666558 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl"] Dec 10 14:49:00 crc kubenswrapper[4727]: W1210 14:49:00.671764 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa45914b_29fb_49a1_8a3b_f29d4f0dedc2.slice/crio-297a22bd5ac5e02f77ee0583ad83c2fbbb78cf003b46aec946558dadec0b6155 WatchSource:0}: Error finding container 297a22bd5ac5e02f77ee0583ad83c2fbbb78cf003b46aec946558dadec0b6155: Status 404 returned error can't find the container with id 297a22bd5ac5e02f77ee0583ad83c2fbbb78cf003b46aec946558dadec0b6155 Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.712371 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q"] Dec 10 14:49:00 crc kubenswrapper[4727]: I1210 14:49:00.744127 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-774dff957-4j66h"] Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.054278 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.184320 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl" event={"ID":"fa45914b-29fb-49a1-8a3b-f29d4f0dedc2","Type":"ContainerStarted","Data":"297a22bd5ac5e02f77ee0583ad83c2fbbb78cf003b46aec946558dadec0b6155"} Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.185886 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q" event={"ID":"bee6b7dc-2e3a-4ae1-bb4b-27a411edee96","Type":"ContainerStarted","Data":"555a9f8dbf065f6fa59cd232c1a107bad1291874c83f3358a4666c2adaffb8c9"} Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.187206 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-6ttgs" event={"ID":"40f66a34-f6b1-472c-ae2b-494c4ccb8735","Type":"ContainerStarted","Data":"4ab9b071c71f586ed0b8f3471bf4ee8e664ba6625746dcfa8b70c66c570b7360"} Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.192995 4727 generic.go:334] "Generic (PLEG): container finished" podID="15b69fa6-852e-4b4a-ac66-08025abf2a8f" containerID="ab154812ddadd7cc9be5b040d8023c304c5ccd37877e62206a8d8b05f7aee2c0" exitCode=0 Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.193100 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8bcw" event={"ID":"15b69fa6-852e-4b4a-ac66-08025abf2a8f","Type":"ContainerDied","Data":"ab154812ddadd7cc9be5b040d8023c304c5ccd37877e62206a8d8b05f7aee2c0"} Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.193136 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8bcw" event={"ID":"15b69fa6-852e-4b4a-ac66-08025abf2a8f","Type":"ContainerDied","Data":"b78dc0b52f7541c0adbf69b78a4470bd0a79fcc4b2aa546659631b659c0d5a73"} Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.193167 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8bcw" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.193177 4727 scope.go:117] "RemoveContainer" containerID="ab154812ddadd7cc9be5b040d8023c304c5ccd37877e62206a8d8b05f7aee2c0" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.197281 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-774dff957-4j66h" event={"ID":"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d","Type":"ContainerStarted","Data":"f5c55782058daba21812fdae65638f19f72fb767dc8d0b4221a8dac134668a64"} Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.197326 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-774dff957-4j66h" event={"ID":"c2ff9ff9-604c-4ecc-b0e3-63382f01a59d","Type":"ContainerStarted","Data":"561b90deaaa781d65a4cac1137b1d5c84aeec552ac70d4af932bf4c49033ae41"} Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.203663 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skpzt\" (UniqueName: \"kubernetes.io/projected/15b69fa6-852e-4b4a-ac66-08025abf2a8f-kube-api-access-skpzt\") pod \"15b69fa6-852e-4b4a-ac66-08025abf2a8f\" (UID: \"15b69fa6-852e-4b4a-ac66-08025abf2a8f\") " Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.203734 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15b69fa6-852e-4b4a-ac66-08025abf2a8f-utilities\") pod \"15b69fa6-852e-4b4a-ac66-08025abf2a8f\" (UID: \"15b69fa6-852e-4b4a-ac66-08025abf2a8f\") " Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.203779 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15b69fa6-852e-4b4a-ac66-08025abf2a8f-catalog-content\") pod \"15b69fa6-852e-4b4a-ac66-08025abf2a8f\" (UID: \"15b69fa6-852e-4b4a-ac66-08025abf2a8f\") " Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.206116 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15b69fa6-852e-4b4a-ac66-08025abf2a8f-utilities" (OuterVolumeSpecName: "utilities") pod "15b69fa6-852e-4b4a-ac66-08025abf2a8f" (UID: "15b69fa6-852e-4b4a-ac66-08025abf2a8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.211088 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15b69fa6-852e-4b4a-ac66-08025abf2a8f-kube-api-access-skpzt" (OuterVolumeSpecName: "kube-api-access-skpzt") pod "15b69fa6-852e-4b4a-ac66-08025abf2a8f" (UID: "15b69fa6-852e-4b4a-ac66-08025abf2a8f"). InnerVolumeSpecName "kube-api-access-skpzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.222922 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-774dff957-4j66h" podStartSLOduration=2.222886928 podStartE2EDuration="2.222886928s" podCreationTimestamp="2025-12-10 14:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:49:01.220673432 +0000 UTC m=+1045.415447994" watchObservedRunningTime="2025-12-10 14:49:01.222886928 +0000 UTC m=+1045.417661470" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.233961 4727 scope.go:117] "RemoveContainer" containerID="2dc12eed37f9fa50e075bdd89498e93f566a7a0fff7edb349a7c71e57b69b5bf" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.236011 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15b69fa6-852e-4b4a-ac66-08025abf2a8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15b69fa6-852e-4b4a-ac66-08025abf2a8f" (UID: "15b69fa6-852e-4b4a-ac66-08025abf2a8f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.254630 4727 scope.go:117] "RemoveContainer" containerID="f304203bac146184686fac4fc84d396da2584de34414c991dc26bac2ab2a225d" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.274153 4727 scope.go:117] "RemoveContainer" containerID="ab154812ddadd7cc9be5b040d8023c304c5ccd37877e62206a8d8b05f7aee2c0" Dec 10 14:49:01 crc kubenswrapper[4727]: E1210 14:49:01.274742 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab154812ddadd7cc9be5b040d8023c304c5ccd37877e62206a8d8b05f7aee2c0\": container with ID starting with ab154812ddadd7cc9be5b040d8023c304c5ccd37877e62206a8d8b05f7aee2c0 not found: ID does not exist" containerID="ab154812ddadd7cc9be5b040d8023c304c5ccd37877e62206a8d8b05f7aee2c0" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.274807 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab154812ddadd7cc9be5b040d8023c304c5ccd37877e62206a8d8b05f7aee2c0"} err="failed to get container status \"ab154812ddadd7cc9be5b040d8023c304c5ccd37877e62206a8d8b05f7aee2c0\": rpc error: code = NotFound desc = could not find container \"ab154812ddadd7cc9be5b040d8023c304c5ccd37877e62206a8d8b05f7aee2c0\": container with ID starting with ab154812ddadd7cc9be5b040d8023c304c5ccd37877e62206a8d8b05f7aee2c0 not found: ID does not exist" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.274845 4727 scope.go:117] "RemoveContainer" containerID="2dc12eed37f9fa50e075bdd89498e93f566a7a0fff7edb349a7c71e57b69b5bf" Dec 10 14:49:01 crc kubenswrapper[4727]: E1210 14:49:01.275225 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dc12eed37f9fa50e075bdd89498e93f566a7a0fff7edb349a7c71e57b69b5bf\": container with ID starting with 2dc12eed37f9fa50e075bdd89498e93f566a7a0fff7edb349a7c71e57b69b5bf not found: ID does not exist" containerID="2dc12eed37f9fa50e075bdd89498e93f566a7a0fff7edb349a7c71e57b69b5bf" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.275251 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dc12eed37f9fa50e075bdd89498e93f566a7a0fff7edb349a7c71e57b69b5bf"} 
err="failed to get container status \"2dc12eed37f9fa50e075bdd89498e93f566a7a0fff7edb349a7c71e57b69b5bf\": rpc error: code = NotFound desc = could not find container \"2dc12eed37f9fa50e075bdd89498e93f566a7a0fff7edb349a7c71e57b69b5bf\": container with ID starting with 2dc12eed37f9fa50e075bdd89498e93f566a7a0fff7edb349a7c71e57b69b5bf not found: ID does not exist" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.275270 4727 scope.go:117] "RemoveContainer" containerID="f304203bac146184686fac4fc84d396da2584de34414c991dc26bac2ab2a225d" Dec 10 14:49:01 crc kubenswrapper[4727]: E1210 14:49:01.275697 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f304203bac146184686fac4fc84d396da2584de34414c991dc26bac2ab2a225d\": container with ID starting with f304203bac146184686fac4fc84d396da2584de34414c991dc26bac2ab2a225d not found: ID does not exist" containerID="f304203bac146184686fac4fc84d396da2584de34414c991dc26bac2ab2a225d" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.275717 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f304203bac146184686fac4fc84d396da2584de34414c991dc26bac2ab2a225d"} err="failed to get container status \"f304203bac146184686fac4fc84d396da2584de34414c991dc26bac2ab2a225d\": rpc error: code = NotFound desc = could not find container \"f304203bac146184686fac4fc84d396da2584de34414c991dc26bac2ab2a225d\": container with ID starting with f304203bac146184686fac4fc84d396da2584de34414c991dc26bac2ab2a225d not found: ID does not exist" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.305885 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15b69fa6-852e-4b4a-ac66-08025abf2a8f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.305939 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skpzt\" (UniqueName: \"kubernetes.io/projected/15b69fa6-852e-4b4a-ac66-08025abf2a8f-kube-api-access-skpzt\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.305953 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15b69fa6-852e-4b4a-ac66-08025abf2a8f-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.531036 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8bcw"] Dec 10 14:49:01 crc kubenswrapper[4727]: I1210 14:49:01.536758 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8bcw"] Dec 10 14:49:02 crc kubenswrapper[4727]: I1210 14:49:02.574245 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15b69fa6-852e-4b4a-ac66-08025abf2a8f" path="/var/lib/kubelet/pods/15b69fa6-852e-4b4a-ac66-08025abf2a8f/volumes" Dec 10 14:49:03 crc kubenswrapper[4727]: I1210 14:49:03.223222 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q" event={"ID":"bee6b7dc-2e3a-4ae1-bb4b-27a411edee96","Type":"ContainerStarted","Data":"9b428470e16aca065625642f09776fe1ec2a95f096482e8a2d042e083087e6ad"} Dec 10 14:49:03 crc kubenswrapper[4727]: I1210 14:49:03.223846 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q" Dec 10 14:49:03 crc 
kubenswrapper[4727]: I1210 14:49:03.226307 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sbrdr" event={"ID":"cd926915-18e8-430d-8329-a56205b43546","Type":"ContainerStarted","Data":"a4754201ae4cab5a231211d17465f12df5775ec00da813ca1fc60e7235ae5a96"} Dec 10 14:49:03 crc kubenswrapper[4727]: I1210 14:49:03.226383 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:49:03 crc kubenswrapper[4727]: I1210 14:49:03.232132 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-6ttgs" event={"ID":"40f66a34-f6b1-472c-ae2b-494c4ccb8735","Type":"ContainerStarted","Data":"43f34a8527f60d0bce3f1860ed5775ae20bcdc4a9c0ec624a81183d77085b796"} Dec 10 14:49:03 crc kubenswrapper[4727]: I1210 14:49:03.247704 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q" podStartSLOduration=2.514127943 podStartE2EDuration="4.24767366s" podCreationTimestamp="2025-12-10 14:48:59 +0000 UTC" firstStartedPulling="2025-12-10 14:49:00.744547992 +0000 UTC m=+1044.939322534" lastFinishedPulling="2025-12-10 14:49:02.478093709 +0000 UTC m=+1046.672868251" observedRunningTime="2025-12-10 14:49:03.241172746 +0000 UTC m=+1047.435947288" watchObservedRunningTime="2025-12-10 14:49:03.24767366 +0000 UTC m=+1047.442448192" Dec 10 14:49:03 crc kubenswrapper[4727]: I1210 14:49:03.260565 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-sbrdr" podStartSLOduration=1.560386398 podStartE2EDuration="4.260535955s" podCreationTimestamp="2025-12-10 14:48:59 +0000 UTC" firstStartedPulling="2025-12-10 14:48:59.776070044 +0000 UTC m=+1043.970844586" lastFinishedPulling="2025-12-10 14:49:02.476219601 +0000 UTC m=+1046.670994143" observedRunningTime="2025-12-10 14:49:03.259860468 +0000 UTC m=+1047.454635020" watchObservedRunningTime="2025-12-10 14:49:03.260535955 +0000 UTC m=+1047.455310497" Dec 10 14:49:04 crc kubenswrapper[4727]: I1210 14:49:04.285453 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:49:04 crc kubenswrapper[4727]: I1210 14:49:04.334924 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:49:05 crc kubenswrapper[4727]: I1210 14:49:05.260964 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl" event={"ID":"fa45914b-29fb-49a1-8a3b-f29d4f0dedc2","Type":"ContainerStarted","Data":"dfeb9871529489328e9e85cc8d8cd91624fb00303642c83fe0d5929483f80c4d"} Dec 10 14:49:05 crc kubenswrapper[4727]: I1210 14:49:05.285018 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kflbl" podStartSLOduration=2.8372743849999997 podStartE2EDuration="6.284889496s" podCreationTimestamp="2025-12-10 14:48:59 +0000 UTC" firstStartedPulling="2025-12-10 14:49:00.690781106 +0000 UTC m=+1044.885555648" lastFinishedPulling="2025-12-10 14:49:04.138396217 +0000 UTC m=+1048.333170759" observedRunningTime="2025-12-10 14:49:05.278370441 +0000 UTC m=+1049.473144983" watchObservedRunningTime="2025-12-10 14:49:05.284889496 +0000 UTC m=+1049.479664038" Dec 10 14:49:06 crc kubenswrapper[4727]: I1210 14:49:06.269849 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-7f946cbc9-6ttgs" event={"ID":"40f66a34-f6b1-472c-ae2b-494c4ccb8735","Type":"ContainerStarted","Data":"1cc1c55aba4a51ad67cce2db961847f8e4d01d704309527fcb59543c8b8fa705"} Dec 10 14:49:06 crc kubenswrapper[4727]: I1210 14:49:06.288521 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-6ttgs" podStartSLOduration=2.277680079 podStartE2EDuration="7.28849461s" podCreationTimestamp="2025-12-10 14:48:59 +0000 UTC" firstStartedPulling="2025-12-10 14:49:00.569516587 +0000 UTC m=+1044.764291129" lastFinishedPulling="2025-12-10 14:49:05.580331108 +0000 UTC m=+1049.775105660" observedRunningTime="2025-12-10 14:49:06.287940576 +0000 UTC m=+1050.482715138" watchObservedRunningTime="2025-12-10 14:49:06.28849461 +0000 UTC m=+1050.483269152" Dec 10 14:49:06 crc kubenswrapper[4727]: I1210 14:49:06.657371 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-42f5w"] Dec 10 14:49:06 crc kubenswrapper[4727]: I1210 14:49:06.657661 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-42f5w" podUID="812e46f9-c13d-487e-be94-e858193c2369" containerName="registry-server" containerID="cri-o://a10cf921f8792ba85700804b077c8610e4aedace308904a83a29c88cd2478391" gracePeriod=2 Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.027578 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.083954 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812e46f9-c13d-487e-be94-e858193c2369-utilities\") pod \"812e46f9-c13d-487e-be94-e858193c2369\" (UID: \"812e46f9-c13d-487e-be94-e858193c2369\") " Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.084025 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812e46f9-c13d-487e-be94-e858193c2369-catalog-content\") pod \"812e46f9-c13d-487e-be94-e858193c2369\" (UID: \"812e46f9-c13d-487e-be94-e858193c2369\") " Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.084057 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pp7c\" (UniqueName: \"kubernetes.io/projected/812e46f9-c13d-487e-be94-e858193c2369-kube-api-access-8pp7c\") pod \"812e46f9-c13d-487e-be94-e858193c2369\" (UID: \"812e46f9-c13d-487e-be94-e858193c2369\") " Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.085335 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/812e46f9-c13d-487e-be94-e858193c2369-utilities" (OuterVolumeSpecName: "utilities") pod "812e46f9-c13d-487e-be94-e858193c2369" (UID: "812e46f9-c13d-487e-be94-e858193c2369"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.089411 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812e46f9-c13d-487e-be94-e858193c2369-kube-api-access-8pp7c" (OuterVolumeSpecName: "kube-api-access-8pp7c") pod "812e46f9-c13d-487e-be94-e858193c2369" (UID: "812e46f9-c13d-487e-be94-e858193c2369"). InnerVolumeSpecName "kube-api-access-8pp7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.139608 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/812e46f9-c13d-487e-be94-e858193c2369-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "812e46f9-c13d-487e-be94-e858193c2369" (UID: "812e46f9-c13d-487e-be94-e858193c2369"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.186073 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812e46f9-c13d-487e-be94-e858193c2369-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.186129 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pp7c\" (UniqueName: \"kubernetes.io/projected/812e46f9-c13d-487e-be94-e858193c2369-kube-api-access-8pp7c\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.186145 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812e46f9-c13d-487e-be94-e858193c2369-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.281691 4727 generic.go:334] "Generic (PLEG): container finished" podID="812e46f9-c13d-487e-be94-e858193c2369" containerID="a10cf921f8792ba85700804b077c8610e4aedace308904a83a29c88cd2478391" exitCode=0 Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.281746 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42f5w" event={"ID":"812e46f9-c13d-487e-be94-e858193c2369","Type":"ContainerDied","Data":"a10cf921f8792ba85700804b077c8610e4aedace308904a83a29c88cd2478391"} Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.281799 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-42f5w" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.281825 4727 scope.go:117] "RemoveContainer" containerID="a10cf921f8792ba85700804b077c8610e4aedace308904a83a29c88cd2478391" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.281806 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42f5w" event={"ID":"812e46f9-c13d-487e-be94-e858193c2369","Type":"ContainerDied","Data":"e16b2a63ed7da520999c1b6326825461189627934d6efdfb32e5279a4e921922"} Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.302086 4727 scope.go:117] "RemoveContainer" containerID="ac0af4a5565aeb58e33b09770a688f43cbc66a6d454b7ba1380757272eee71a1" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.326575 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-42f5w"] Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.332045 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-42f5w"] Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.347808 4727 scope.go:117] "RemoveContainer" containerID="1256da40929c1fbb62c9ddf551641ce2a3153366cd8ed9cdfe8f06c4d6251b19" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.364852 4727 scope.go:117] "RemoveContainer" containerID="a10cf921f8792ba85700804b077c8610e4aedace308904a83a29c88cd2478391" Dec 10 14:49:07 crc kubenswrapper[4727]: E1210 14:49:07.365387 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a10cf921f8792ba85700804b077c8610e4aedace308904a83a29c88cd2478391\": container with ID starting with a10cf921f8792ba85700804b077c8610e4aedace308904a83a29c88cd2478391 not found: ID does not exist" containerID="a10cf921f8792ba85700804b077c8610e4aedace308904a83a29c88cd2478391" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.365474 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a10cf921f8792ba85700804b077c8610e4aedace308904a83a29c88cd2478391"} err="failed to get container status \"a10cf921f8792ba85700804b077c8610e4aedace308904a83a29c88cd2478391\": rpc error: code = NotFound desc = could not find container \"a10cf921f8792ba85700804b077c8610e4aedace308904a83a29c88cd2478391\": container with ID starting with a10cf921f8792ba85700804b077c8610e4aedace308904a83a29c88cd2478391 not found: ID does not exist" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.365549 4727 scope.go:117] "RemoveContainer" containerID="ac0af4a5565aeb58e33b09770a688f43cbc66a6d454b7ba1380757272eee71a1" Dec 10 14:49:07 crc kubenswrapper[4727]: E1210 14:49:07.365846 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac0af4a5565aeb58e33b09770a688f43cbc66a6d454b7ba1380757272eee71a1\": container with ID starting with ac0af4a5565aeb58e33b09770a688f43cbc66a6d454b7ba1380757272eee71a1 not found: ID does not exist" containerID="ac0af4a5565aeb58e33b09770a688f43cbc66a6d454b7ba1380757272eee71a1" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.365892 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac0af4a5565aeb58e33b09770a688f43cbc66a6d454b7ba1380757272eee71a1"} err="failed to get container status \"ac0af4a5565aeb58e33b09770a688f43cbc66a6d454b7ba1380757272eee71a1\": rpc error: code = NotFound desc = could not find 
container \"ac0af4a5565aeb58e33b09770a688f43cbc66a6d454b7ba1380757272eee71a1\": container with ID starting with ac0af4a5565aeb58e33b09770a688f43cbc66a6d454b7ba1380757272eee71a1 not found: ID does not exist" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.365930 4727 scope.go:117] "RemoveContainer" containerID="1256da40929c1fbb62c9ddf551641ce2a3153366cd8ed9cdfe8f06c4d6251b19" Dec 10 14:49:07 crc kubenswrapper[4727]: E1210 14:49:07.366205 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1256da40929c1fbb62c9ddf551641ce2a3153366cd8ed9cdfe8f06c4d6251b19\": container with ID starting with 1256da40929c1fbb62c9ddf551641ce2a3153366cd8ed9cdfe8f06c4d6251b19 not found: ID does not exist" containerID="1256da40929c1fbb62c9ddf551641ce2a3153366cd8ed9cdfe8f06c4d6251b19" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.366290 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1256da40929c1fbb62c9ddf551641ce2a3153366cd8ed9cdfe8f06c4d6251b19"} err="failed to get container status \"1256da40929c1fbb62c9ddf551641ce2a3153366cd8ed9cdfe8f06c4d6251b19\": rpc error: code = NotFound desc = could not find container \"1256da40929c1fbb62c9ddf551641ce2a3153366cd8ed9cdfe8f06c4d6251b19\": container with ID starting with 1256da40929c1fbb62c9ddf551641ce2a3153366cd8ed9cdfe8f06c4d6251b19 not found: ID does not exist" Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.724254 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:49:07 crc kubenswrapper[4727]: I1210 14:49:07.724325 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:49:08 crc kubenswrapper[4727]: I1210 14:49:08.570216 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="812e46f9-c13d-487e-be94-e858193c2369" path="/var/lib/kubelet/pods/812e46f9-c13d-487e-be94-e858193c2369/volumes" Dec 10 14:49:09 crc kubenswrapper[4727]: I1210 14:49:09.757892 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-sbrdr" Dec 10 14:49:10 crc kubenswrapper[4727]: I1210 14:49:10.159317 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:10 crc kubenswrapper[4727]: I1210 14:49:10.159501 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:10 crc kubenswrapper[4727]: I1210 14:49:10.165834 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:10 crc kubenswrapper[4727]: I1210 14:49:10.309459 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-774dff957-4j66h" Dec 10 14:49:10 crc kubenswrapper[4727]: I1210 14:49:10.391396 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-swj5s"] Dec 10 14:49:20 crc 
kubenswrapper[4727]: I1210 14:49:20.288046 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-x999q" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.440080 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-swj5s" podUID="6d8cde10-5565-4980-a4e2-a30f26707a0e" containerName="console" containerID="cri-o://2aaad9c4da898cb7baea8257a84cda887202fa758199812a8e4ea405083832bc" gracePeriod=15 Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.512838 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b"] Dec 10 14:49:35 crc kubenswrapper[4727]: E1210 14:49:35.513171 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b69fa6-852e-4b4a-ac66-08025abf2a8f" containerName="extract-utilities" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.513183 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b69fa6-852e-4b4a-ac66-08025abf2a8f" containerName="extract-utilities" Dec 10 14:49:35 crc kubenswrapper[4727]: E1210 14:49:35.513208 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b69fa6-852e-4b4a-ac66-08025abf2a8f" containerName="extract-content" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.513214 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b69fa6-852e-4b4a-ac66-08025abf2a8f" containerName="extract-content" Dec 10 14:49:35 crc kubenswrapper[4727]: E1210 14:49:35.513225 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b69fa6-852e-4b4a-ac66-08025abf2a8f" containerName="registry-server" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.513232 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b69fa6-852e-4b4a-ac66-08025abf2a8f" containerName="registry-server" Dec 10 14:49:35 crc kubenswrapper[4727]: E1210 14:49:35.513238 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812e46f9-c13d-487e-be94-e858193c2369" containerName="extract-utilities" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.513244 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="812e46f9-c13d-487e-be94-e858193c2369" containerName="extract-utilities" Dec 10 14:49:35 crc kubenswrapper[4727]: E1210 14:49:35.513252 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812e46f9-c13d-487e-be94-e858193c2369" containerName="extract-content" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.513258 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="812e46f9-c13d-487e-be94-e858193c2369" containerName="extract-content" Dec 10 14:49:35 crc kubenswrapper[4727]: E1210 14:49:35.513268 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812e46f9-c13d-487e-be94-e858193c2369" containerName="registry-server" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.513275 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="812e46f9-c13d-487e-be94-e858193c2369" containerName="registry-server" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.513383 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="812e46f9-c13d-487e-be94-e858193c2369" containerName="registry-server" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.513397 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="15b69fa6-852e-4b4a-ac66-08025abf2a8f" containerName="registry-server" Dec 10 14:49:35 
crc kubenswrapper[4727]: I1210 14:49:35.514367 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.517688 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.524184 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b"] Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.711968 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqfvt\" (UniqueName: \"kubernetes.io/projected/ace248eb-6c0e-465a-a21f-c1b5508cecab-kube-api-access-kqfvt\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b\" (UID: \"ace248eb-6c0e-465a-a21f-c1b5508cecab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.712345 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ace248eb-6c0e-465a-a21f-c1b5508cecab-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b\" (UID: \"ace248eb-6c0e-465a-a21f-c1b5508cecab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.712451 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ace248eb-6c0e-465a-a21f-c1b5508cecab-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b\" (UID: \"ace248eb-6c0e-465a-a21f-c1b5508cecab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.813673 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqfvt\" (UniqueName: \"kubernetes.io/projected/ace248eb-6c0e-465a-a21f-c1b5508cecab-kube-api-access-kqfvt\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b\" (UID: \"ace248eb-6c0e-465a-a21f-c1b5508cecab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.813794 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ace248eb-6c0e-465a-a21f-c1b5508cecab-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b\" (UID: \"ace248eb-6c0e-465a-a21f-c1b5508cecab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.813886 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ace248eb-6c0e-465a-a21f-c1b5508cecab-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b\" (UID: \"ace248eb-6c0e-465a-a21f-c1b5508cecab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.814417 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ace248eb-6c0e-465a-a21f-c1b5508cecab-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b\" (UID: \"ace248eb-6c0e-465a-a21f-c1b5508cecab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.814449 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ace248eb-6c0e-465a-a21f-c1b5508cecab-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b\" (UID: \"ace248eb-6c0e-465a-a21f-c1b5508cecab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.849005 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqfvt\" (UniqueName: \"kubernetes.io/projected/ace248eb-6c0e-465a-a21f-c1b5508cecab-kube-api-access-kqfvt\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b\" (UID: \"ace248eb-6c0e-465a-a21f-c1b5508cecab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.925362 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-swj5s_6d8cde10-5565-4980-a4e2-a30f26707a0e/console/0.log" Dec 10 14:49:35 crc kubenswrapper[4727]: I1210 14:49:35.925435 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.117305 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4s2z\" (UniqueName: \"kubernetes.io/projected/6d8cde10-5565-4980-a4e2-a30f26707a0e-kube-api-access-t4s2z\") pod \"6d8cde10-5565-4980-a4e2-a30f26707a0e\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.117388 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-oauth-serving-cert\") pod \"6d8cde10-5565-4980-a4e2-a30f26707a0e\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.117421 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-oauth-config\") pod \"6d8cde10-5565-4980-a4e2-a30f26707a0e\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.117514 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-serving-cert\") pod \"6d8cde10-5565-4980-a4e2-a30f26707a0e\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.117536 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-config\") pod \"6d8cde10-5565-4980-a4e2-a30f26707a0e\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 
14:49:36.117590 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-service-ca\") pod \"6d8cde10-5565-4980-a4e2-a30f26707a0e\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.117614 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-trusted-ca-bundle\") pod \"6d8cde10-5565-4980-a4e2-a30f26707a0e\" (UID: \"6d8cde10-5565-4980-a4e2-a30f26707a0e\") " Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.118445 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6d8cde10-5565-4980-a4e2-a30f26707a0e" (UID: "6d8cde10-5565-4980-a4e2-a30f26707a0e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.118451 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-service-ca" (OuterVolumeSpecName: "service-ca") pod "6d8cde10-5565-4980-a4e2-a30f26707a0e" (UID: "6d8cde10-5565-4980-a4e2-a30f26707a0e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.118446 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6d8cde10-5565-4980-a4e2-a30f26707a0e" (UID: "6d8cde10-5565-4980-a4e2-a30f26707a0e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.118636 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.118656 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.118670 4727 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.118954 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-config" (OuterVolumeSpecName: "console-config") pod "6d8cde10-5565-4980-a4e2-a30f26707a0e" (UID: "6d8cde10-5565-4980-a4e2-a30f26707a0e"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.120831 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6d8cde10-5565-4980-a4e2-a30f26707a0e" (UID: "6d8cde10-5565-4980-a4e2-a30f26707a0e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.120990 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d8cde10-5565-4980-a4e2-a30f26707a0e-kube-api-access-t4s2z" (OuterVolumeSpecName: "kube-api-access-t4s2z") pod "6d8cde10-5565-4980-a4e2-a30f26707a0e" (UID: "6d8cde10-5565-4980-a4e2-a30f26707a0e"). InnerVolumeSpecName "kube-api-access-t4s2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.121556 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6d8cde10-5565-4980-a4e2-a30f26707a0e" (UID: "6d8cde10-5565-4980-a4e2-a30f26707a0e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.139499 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.219058 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4s2z\" (UniqueName: \"kubernetes.io/projected/6d8cde10-5565-4980-a4e2-a30f26707a0e-kube-api-access-t4s2z\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.219090 4727 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.219101 4727 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.219109 4727 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d8cde10-5565-4980-a4e2-a30f26707a0e-console-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.424856 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b"] Dec 10 14:49:36 crc kubenswrapper[4727]: W1210 14:49:36.433347 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace248eb_6c0e_465a_a21f_c1b5508cecab.slice/crio-3358a48c729252e85521707b85c49cb3737d7add23aa8c4e3c42a6321a00680a WatchSource:0}: Error finding container 3358a48c729252e85521707b85c49cb3737d7add23aa8c4e3c42a6321a00680a: Status 404 returned error can't find the container with id 3358a48c729252e85521707b85c49cb3737d7add23aa8c4e3c42a6321a00680a Dec 10 14:49:36 crc kubenswrapper[4727]: 
I1210 14:49:36.684832 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-swj5s_6d8cde10-5565-4980-a4e2-a30f26707a0e/console/0.log" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.684941 4727 generic.go:334] "Generic (PLEG): container finished" podID="6d8cde10-5565-4980-a4e2-a30f26707a0e" containerID="2aaad9c4da898cb7baea8257a84cda887202fa758199812a8e4ea405083832bc" exitCode=2 Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.685287 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-swj5s" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.698214 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-swj5s" event={"ID":"6d8cde10-5565-4980-a4e2-a30f26707a0e","Type":"ContainerDied","Data":"2aaad9c4da898cb7baea8257a84cda887202fa758199812a8e4ea405083832bc"} Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.698271 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-swj5s" event={"ID":"6d8cde10-5565-4980-a4e2-a30f26707a0e","Type":"ContainerDied","Data":"eb8dff6a41ba22b19d83a70becca8baa913c15d401b7100c72f10bb5779d00ef"} Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.698309 4727 scope.go:117] "RemoveContainer" containerID="2aaad9c4da898cb7baea8257a84cda887202fa758199812a8e4ea405083832bc" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.705662 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" event={"ID":"ace248eb-6c0e-465a-a21f-c1b5508cecab","Type":"ContainerStarted","Data":"3358a48c729252e85521707b85c49cb3737d7add23aa8c4e3c42a6321a00680a"} Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.747638 4727 scope.go:117] "RemoveContainer" containerID="2aaad9c4da898cb7baea8257a84cda887202fa758199812a8e4ea405083832bc" Dec 10 14:49:36 crc kubenswrapper[4727]: E1210 14:49:36.748180 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aaad9c4da898cb7baea8257a84cda887202fa758199812a8e4ea405083832bc\": container with ID starting with 2aaad9c4da898cb7baea8257a84cda887202fa758199812a8e4ea405083832bc not found: ID does not exist" containerID="2aaad9c4da898cb7baea8257a84cda887202fa758199812a8e4ea405083832bc" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.748240 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aaad9c4da898cb7baea8257a84cda887202fa758199812a8e4ea405083832bc"} err="failed to get container status \"2aaad9c4da898cb7baea8257a84cda887202fa758199812a8e4ea405083832bc\": rpc error: code = NotFound desc = could not find container \"2aaad9c4da898cb7baea8257a84cda887202fa758199812a8e4ea405083832bc\": container with ID starting with 2aaad9c4da898cb7baea8257a84cda887202fa758199812a8e4ea405083832bc not found: ID does not exist" Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.778384 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-swj5s"] Dec 10 14:49:36 crc kubenswrapper[4727]: I1210 14:49:36.782697 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-swj5s"] Dec 10 14:49:37 crc kubenswrapper[4727]: I1210 14:49:37.723478 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:49:37 crc kubenswrapper[4727]: I1210 14:49:37.723796 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:49:37 crc kubenswrapper[4727]: I1210 14:49:37.723838 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:49:37 crc kubenswrapper[4727]: I1210 14:49:37.724485 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4da8f537ad153791693a45193621652b13001e5dd72906f744d548921ad04f8"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 14:49:37 crc kubenswrapper[4727]: I1210 14:49:37.724529 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://d4da8f537ad153791693a45193621652b13001e5dd72906f744d548921ad04f8" gracePeriod=600 Dec 10 14:49:38 crc kubenswrapper[4727]: I1210 14:49:38.570188 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d8cde10-5565-4980-a4e2-a30f26707a0e" path="/var/lib/kubelet/pods/6d8cde10-5565-4980-a4e2-a30f26707a0e/volumes" Dec 10 14:49:38 crc kubenswrapper[4727]: I1210 14:49:38.725328 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="d4da8f537ad153791693a45193621652b13001e5dd72906f744d548921ad04f8" exitCode=0 Dec 10 14:49:38 crc kubenswrapper[4727]: I1210 14:49:38.725424 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"d4da8f537ad153791693a45193621652b13001e5dd72906f744d548921ad04f8"} Dec 10 14:49:38 crc kubenswrapper[4727]: I1210 14:49:38.725502 4727 scope.go:117] "RemoveContainer" containerID="cd46d2062fb92e117b59daaca2ef5ffa90b444c25a1b8e3e5c4e2bdf99695cf9" Dec 10 14:49:38 crc kubenswrapper[4727]: I1210 14:49:38.731499 4727 generic.go:334] "Generic (PLEG): container finished" podID="ace248eb-6c0e-465a-a21f-c1b5508cecab" containerID="f82a50873892b9b8811d031e53d70ceedd8739ad1347d73da843a029aa8e8af8" exitCode=0 Dec 10 14:49:38 crc kubenswrapper[4727]: I1210 14:49:38.731540 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" event={"ID":"ace248eb-6c0e-465a-a21f-c1b5508cecab","Type":"ContainerDied","Data":"f82a50873892b9b8811d031e53d70ceedd8739ad1347d73da843a029aa8e8af8"} Dec 10 14:49:39 crc kubenswrapper[4727]: I1210 14:49:39.752815 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" 
event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"bf0c0cb5db6cbb369cba9f7cbcfb4667db68ee6a05b492c3d6b69303943d84f1"} Dec 10 14:49:40 crc kubenswrapper[4727]: I1210 14:49:40.768609 4727 generic.go:334] "Generic (PLEG): container finished" podID="ace248eb-6c0e-465a-a21f-c1b5508cecab" containerID="9c4c5caefdba7675f02e17631510f095aeb371d1f509d2c1f94092983573c20f" exitCode=0 Dec 10 14:49:40 crc kubenswrapper[4727]: I1210 14:49:40.768982 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" event={"ID":"ace248eb-6c0e-465a-a21f-c1b5508cecab","Type":"ContainerDied","Data":"9c4c5caefdba7675f02e17631510f095aeb371d1f509d2c1f94092983573c20f"} Dec 10 14:49:40 crc kubenswrapper[4727]: E1210 14:49:40.968168 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d8cde10_5565_4980_a4e2_a30f26707a0e.slice/crio-eb8dff6a41ba22b19d83a70becca8baa913c15d401b7100c72f10bb5779d00ef\": RecentStats: unable to find data in memory cache]" Dec 10 14:49:41 crc kubenswrapper[4727]: I1210 14:49:41.780377 4727 generic.go:334] "Generic (PLEG): container finished" podID="ace248eb-6c0e-465a-a21f-c1b5508cecab" containerID="f99d5479e9adc59f6dc3472363c45dc32324826aae818fa6404ced5208b973a2" exitCode=0 Dec 10 14:49:41 crc kubenswrapper[4727]: I1210 14:49:41.780500 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" event={"ID":"ace248eb-6c0e-465a-a21f-c1b5508cecab","Type":"ContainerDied","Data":"f99d5479e9adc59f6dc3472363c45dc32324826aae818fa6404ced5208b973a2"} Dec 10 14:49:43 crc kubenswrapper[4727]: I1210 14:49:43.035890 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" Dec 10 14:49:43 crc kubenswrapper[4727]: I1210 14:49:43.210360 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqfvt\" (UniqueName: \"kubernetes.io/projected/ace248eb-6c0e-465a-a21f-c1b5508cecab-kube-api-access-kqfvt\") pod \"ace248eb-6c0e-465a-a21f-c1b5508cecab\" (UID: \"ace248eb-6c0e-465a-a21f-c1b5508cecab\") " Dec 10 14:49:43 crc kubenswrapper[4727]: I1210 14:49:43.210459 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ace248eb-6c0e-465a-a21f-c1b5508cecab-bundle\") pod \"ace248eb-6c0e-465a-a21f-c1b5508cecab\" (UID: \"ace248eb-6c0e-465a-a21f-c1b5508cecab\") " Dec 10 14:49:43 crc kubenswrapper[4727]: I1210 14:49:43.210545 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ace248eb-6c0e-465a-a21f-c1b5508cecab-util\") pod \"ace248eb-6c0e-465a-a21f-c1b5508cecab\" (UID: \"ace248eb-6c0e-465a-a21f-c1b5508cecab\") " Dec 10 14:49:43 crc kubenswrapper[4727]: I1210 14:49:43.221819 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace248eb-6c0e-465a-a21f-c1b5508cecab-bundle" (OuterVolumeSpecName: "bundle") pod "ace248eb-6c0e-465a-a21f-c1b5508cecab" (UID: "ace248eb-6c0e-465a-a21f-c1b5508cecab"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:49:43 crc kubenswrapper[4727]: I1210 14:49:43.222054 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace248eb-6c0e-465a-a21f-c1b5508cecab-kube-api-access-kqfvt" (OuterVolumeSpecName: "kube-api-access-kqfvt") pod "ace248eb-6c0e-465a-a21f-c1b5508cecab" (UID: "ace248eb-6c0e-465a-a21f-c1b5508cecab"). InnerVolumeSpecName "kube-api-access-kqfvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:49:43 crc kubenswrapper[4727]: I1210 14:49:43.225702 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace248eb-6c0e-465a-a21f-c1b5508cecab-util" (OuterVolumeSpecName: "util") pod "ace248eb-6c0e-465a-a21f-c1b5508cecab" (UID: "ace248eb-6c0e-465a-a21f-c1b5508cecab"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:49:43 crc kubenswrapper[4727]: I1210 14:49:43.311747 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ace248eb-6c0e-465a-a21f-c1b5508cecab-util\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:43 crc kubenswrapper[4727]: I1210 14:49:43.311799 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqfvt\" (UniqueName: \"kubernetes.io/projected/ace248eb-6c0e-465a-a21f-c1b5508cecab-kube-api-access-kqfvt\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:43 crc kubenswrapper[4727]: I1210 14:49:43.311813 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ace248eb-6c0e-465a-a21f-c1b5508cecab-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:43 crc kubenswrapper[4727]: I1210 14:49:43.795006 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" event={"ID":"ace248eb-6c0e-465a-a21f-c1b5508cecab","Type":"ContainerDied","Data":"3358a48c729252e85521707b85c49cb3737d7add23aa8c4e3c42a6321a00680a"} Dec 10 14:49:43 crc kubenswrapper[4727]: I1210 14:49:43.795055 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3358a48c729252e85521707b85c49cb3737d7add23aa8c4e3c42a6321a00680a" Dec 10 14:49:43 crc kubenswrapper[4727]: I1210 14:49:43.795089 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b" Dec 10 14:49:51 crc kubenswrapper[4727]: E1210 14:49:51.123711 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d8cde10_5565_4980_a4e2_a30f26707a0e.slice/crio-eb8dff6a41ba22b19d83a70becca8baa913c15d401b7100c72f10bb5779d00ef\": RecentStats: unable to find data in memory cache]" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.679139 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2"] Dec 10 14:49:52 crc kubenswrapper[4727]: E1210 14:49:52.679381 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace248eb-6c0e-465a-a21f-c1b5508cecab" containerName="extract" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.679394 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace248eb-6c0e-465a-a21f-c1b5508cecab" containerName="extract" Dec 10 14:49:52 crc kubenswrapper[4727]: E1210 14:49:52.679412 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace248eb-6c0e-465a-a21f-c1b5508cecab" containerName="util" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.679418 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace248eb-6c0e-465a-a21f-c1b5508cecab" containerName="util" Dec 10 14:49:52 crc kubenswrapper[4727]: E1210 14:49:52.679428 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace248eb-6c0e-465a-a21f-c1b5508cecab" containerName="pull" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.679434 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace248eb-6c0e-465a-a21f-c1b5508cecab" containerName="pull" Dec 10 14:49:52 crc kubenswrapper[4727]: E1210 14:49:52.679446 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8cde10-5565-4980-a4e2-a30f26707a0e" containerName="console" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.679451 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8cde10-5565-4980-a4e2-a30f26707a0e" containerName="console" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.679564 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace248eb-6c0e-465a-a21f-c1b5508cecab" containerName="extract" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.679577 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d8cde10-5565-4980-a4e2-a30f26707a0e" containerName="console" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.679994 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.682519 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.682891 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.683073 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-j6lhd" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.683101 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.684033 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.695136 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2"] Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.733801 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/294cf2e6-2528-4fdd-be76-426382e72b19-apiservice-cert\") pod \"metallb-operator-controller-manager-7f4c459996-xq7b2\" (UID: \"294cf2e6-2528-4fdd-be76-426382e72b19\") " pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.733885 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/294cf2e6-2528-4fdd-be76-426382e72b19-webhook-cert\") pod \"metallb-operator-controller-manager-7f4c459996-xq7b2\" (UID: \"294cf2e6-2528-4fdd-be76-426382e72b19\") " pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.733998 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rhbg\" (UniqueName: \"kubernetes.io/projected/294cf2e6-2528-4fdd-be76-426382e72b19-kube-api-access-2rhbg\") pod \"metallb-operator-controller-manager-7f4c459996-xq7b2\" (UID: \"294cf2e6-2528-4fdd-be76-426382e72b19\") " pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.834987 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/294cf2e6-2528-4fdd-be76-426382e72b19-apiservice-cert\") pod \"metallb-operator-controller-manager-7f4c459996-xq7b2\" (UID: \"294cf2e6-2528-4fdd-be76-426382e72b19\") " pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.835063 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/294cf2e6-2528-4fdd-be76-426382e72b19-webhook-cert\") pod \"metallb-operator-controller-manager-7f4c459996-xq7b2\" (UID: \"294cf2e6-2528-4fdd-be76-426382e72b19\") " pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.835145 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rhbg\" (UniqueName: \"kubernetes.io/projected/294cf2e6-2528-4fdd-be76-426382e72b19-kube-api-access-2rhbg\") pod \"metallb-operator-controller-manager-7f4c459996-xq7b2\" (UID: \"294cf2e6-2528-4fdd-be76-426382e72b19\") " pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.841778 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/294cf2e6-2528-4fdd-be76-426382e72b19-apiservice-cert\") pod \"metallb-operator-controller-manager-7f4c459996-xq7b2\" (UID: \"294cf2e6-2528-4fdd-be76-426382e72b19\") " pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.843531 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/294cf2e6-2528-4fdd-be76-426382e72b19-webhook-cert\") pod \"metallb-operator-controller-manager-7f4c459996-xq7b2\" (UID: \"294cf2e6-2528-4fdd-be76-426382e72b19\") " pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" Dec 10 14:49:52 crc kubenswrapper[4727]: I1210 14:49:52.867301 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rhbg\" (UniqueName: \"kubernetes.io/projected/294cf2e6-2528-4fdd-be76-426382e72b19-kube-api-access-2rhbg\") pod \"metallb-operator-controller-manager-7f4c459996-xq7b2\" (UID: \"294cf2e6-2528-4fdd-be76-426382e72b19\") " pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.003338 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.036840 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-56d895889c-x5d82"] Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.043593 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.049936 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.050170 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.050312 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-rs9l2" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.121095 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-56d895889c-x5d82"] Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.143268 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhl2f\" (UniqueName: \"kubernetes.io/projected/883f25c7-a8aa-47bb-9726-f5fdc14e4952-kube-api-access-vhl2f\") pod \"metallb-operator-webhook-server-56d895889c-x5d82\" (UID: \"883f25c7-a8aa-47bb-9726-f5fdc14e4952\") " pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.143369 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/883f25c7-a8aa-47bb-9726-f5fdc14e4952-webhook-cert\") pod \"metallb-operator-webhook-server-56d895889c-x5d82\" (UID: \"883f25c7-a8aa-47bb-9726-f5fdc14e4952\") " pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.143398 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/883f25c7-a8aa-47bb-9726-f5fdc14e4952-apiservice-cert\") pod \"metallb-operator-webhook-server-56d895889c-x5d82\" (UID: \"883f25c7-a8aa-47bb-9726-f5fdc14e4952\") " pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.244592 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhl2f\" (UniqueName: \"kubernetes.io/projected/883f25c7-a8aa-47bb-9726-f5fdc14e4952-kube-api-access-vhl2f\") pod \"metallb-operator-webhook-server-56d895889c-x5d82\" (UID: \"883f25c7-a8aa-47bb-9726-f5fdc14e4952\") " pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.244695 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/883f25c7-a8aa-47bb-9726-f5fdc14e4952-webhook-cert\") pod \"metallb-operator-webhook-server-56d895889c-x5d82\" (UID: \"883f25c7-a8aa-47bb-9726-f5fdc14e4952\") " pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.244727 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/883f25c7-a8aa-47bb-9726-f5fdc14e4952-apiservice-cert\") pod \"metallb-operator-webhook-server-56d895889c-x5d82\" (UID: \"883f25c7-a8aa-47bb-9726-f5fdc14e4952\") " pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 
14:49:53.253595 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/883f25c7-a8aa-47bb-9726-f5fdc14e4952-apiservice-cert\") pod \"metallb-operator-webhook-server-56d895889c-x5d82\" (UID: \"883f25c7-a8aa-47bb-9726-f5fdc14e4952\") " pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.260511 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/883f25c7-a8aa-47bb-9726-f5fdc14e4952-webhook-cert\") pod \"metallb-operator-webhook-server-56d895889c-x5d82\" (UID: \"883f25c7-a8aa-47bb-9726-f5fdc14e4952\") " pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.270732 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhl2f\" (UniqueName: \"kubernetes.io/projected/883f25c7-a8aa-47bb-9726-f5fdc14e4952-kube-api-access-vhl2f\") pod \"metallb-operator-webhook-server-56d895889c-x5d82\" (UID: \"883f25c7-a8aa-47bb-9726-f5fdc14e4952\") " pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.422819 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.573034 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2"] Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.863394 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" event={"ID":"294cf2e6-2528-4fdd-be76-426382e72b19","Type":"ContainerStarted","Data":"b32ebd9fb3b1869c31675d0ddeceb2fa5462a3adaf1ea877fc3905e83fd27bb3"} Dec 10 14:49:53 crc kubenswrapper[4727]: I1210 14:49:53.899430 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-56d895889c-x5d82"] Dec 10 14:49:53 crc kubenswrapper[4727]: W1210 14:49:53.902518 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod883f25c7_a8aa_47bb_9726_f5fdc14e4952.slice/crio-e10db174c049177fa62b087564c7b550e88f4e22b7a2f396d92e1f6bf621fa76 WatchSource:0}: Error finding container e10db174c049177fa62b087564c7b550e88f4e22b7a2f396d92e1f6bf621fa76: Status 404 returned error can't find the container with id e10db174c049177fa62b087564c7b550e88f4e22b7a2f396d92e1f6bf621fa76 Dec 10 14:49:54 crc kubenswrapper[4727]: I1210 14:49:54.870806 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" event={"ID":"883f25c7-a8aa-47bb-9726-f5fdc14e4952","Type":"ContainerStarted","Data":"e10db174c049177fa62b087564c7b550e88f4e22b7a2f396d92e1f6bf621fa76"} Dec 10 14:49:57 crc kubenswrapper[4727]: I1210 14:49:57.897299 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" event={"ID":"294cf2e6-2528-4fdd-be76-426382e72b19","Type":"ContainerStarted","Data":"55f3273e54dcac52754da1ab0a3dc4b11fb2efeb4212527d8320382c61511e0f"} Dec 10 14:49:57 crc kubenswrapper[4727]: I1210 14:49:57.898090 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" Dec 10 14:49:59 crc kubenswrapper[4727]: I1210 14:49:59.912620 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" event={"ID":"883f25c7-a8aa-47bb-9726-f5fdc14e4952","Type":"ContainerStarted","Data":"ce3aee836918b8388c6f228d595a2e78c46f9a195290b2e9441abffb82c2438c"} Dec 10 14:49:59 crc kubenswrapper[4727]: I1210 14:49:59.913136 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" Dec 10 14:49:59 crc kubenswrapper[4727]: I1210 14:49:59.935176 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" podStartSLOduration=4.568230177 podStartE2EDuration="7.935139323s" podCreationTimestamp="2025-12-10 14:49:52 +0000 UTC" firstStartedPulling="2025-12-10 14:49:53.592466598 +0000 UTC m=+1097.787241140" lastFinishedPulling="2025-12-10 14:49:56.959375744 +0000 UTC m=+1101.154150286" observedRunningTime="2025-12-10 14:49:57.936976452 +0000 UTC m=+1102.131751004" watchObservedRunningTime="2025-12-10 14:49:59.935139323 +0000 UTC m=+1104.129913865" Dec 10 14:49:59 crc kubenswrapper[4727]: I1210 14:49:59.936633 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" podStartSLOduration=1.109517439 podStartE2EDuration="6.93662627s" podCreationTimestamp="2025-12-10 14:49:53 +0000 UTC" firstStartedPulling="2025-12-10 14:49:53.905890813 +0000 UTC m=+1098.100665355" lastFinishedPulling="2025-12-10 14:49:59.732999644 +0000 UTC m=+1103.927774186" observedRunningTime="2025-12-10 14:49:59.930873745 +0000 UTC m=+1104.125648297" watchObservedRunningTime="2025-12-10 14:49:59.93662627 +0000 UTC m=+1104.131400812" Dec 10 14:50:01 crc kubenswrapper[4727]: E1210 14:50:01.257025 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d8cde10_5565_4980_a4e2_a30f26707a0e.slice/crio-eb8dff6a41ba22b19d83a70becca8baa913c15d401b7100c72f10bb5779d00ef\": RecentStats: unable to find data in memory cache]" Dec 10 14:50:11 crc kubenswrapper[4727]: E1210 14:50:11.400802 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d8cde10_5565_4980_a4e2_a30f26707a0e.slice/crio-eb8dff6a41ba22b19d83a70becca8baa913c15d401b7100c72f10bb5779d00ef\": RecentStats: unable to find data in memory cache]" Dec 10 14:50:13 crc kubenswrapper[4727]: I1210 14:50:13.429689 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-56d895889c-x5d82" Dec 10 14:50:21 crc kubenswrapper[4727]: E1210 14:50:21.548523 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d8cde10_5565_4980_a4e2_a30f26707a0e.slice/crio-eb8dff6a41ba22b19d83a70becca8baa913c15d401b7100c72f10bb5779d00ef\": RecentStats: unable to find data in memory cache]" Dec 10 14:50:31 crc kubenswrapper[4727]: E1210 14:50:31.675421 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d8cde10_5565_4980_a4e2_a30f26707a0e.slice/crio-eb8dff6a41ba22b19d83a70becca8baa913c15d401b7100c72f10bb5779d00ef\": RecentStats: unable to find data in memory cache]" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.006221 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7f4c459996-xq7b2" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.865168 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-xcqhz"] Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.868275 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.871287 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs"] Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.872107 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.872251 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.872348 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-r67hw" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.876069 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.882123 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.893758 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs"] Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.975550 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a4c9e124-30b6-42a8-9b85-bef3d1836f12-frr-conf\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.975866 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m5fk\" (UniqueName: \"kubernetes.io/projected/bcd252b1-5939-47ba-99a4-300d504b615f-kube-api-access-9m5fk\") pod \"frr-k8s-webhook-server-7fcb986d4-7b7cs\" (UID: \"bcd252b1-5939-47ba-99a4-300d504b615f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.975985 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a4c9e124-30b6-42a8-9b85-bef3d1836f12-frr-startup\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.976059 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a4c9e124-30b6-42a8-9b85-bef3d1836f12-metrics\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " 
pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.976137 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bcd252b1-5939-47ba-99a4-300d504b615f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-7b7cs\" (UID: \"bcd252b1-5939-47ba-99a4-300d504b615f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.976218 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4c9e124-30b6-42a8-9b85-bef3d1836f12-metrics-certs\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.976307 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxx25\" (UniqueName: \"kubernetes.io/projected/a4c9e124-30b6-42a8-9b85-bef3d1836f12-kube-api-access-rxx25\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.976391 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a4c9e124-30b6-42a8-9b85-bef3d1836f12-frr-sockets\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.976485 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a4c9e124-30b6-42a8-9b85-bef3d1836f12-reloader\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.994895 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-dmgwc"] Dec 10 14:50:33 crc kubenswrapper[4727]: I1210 14:50:33.996608 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-dmgwc" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.001507 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.010842 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-26pqg"] Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.012205 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-26pqg" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.015481 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.015598 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.015805 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.016128 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-tpf42" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.016979 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-dmgwc"] Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078234 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/817c95ce-865f-41a5-a7bf-e88c222e8a4a-metallb-excludel2\") pod \"speaker-26pqg\" (UID: \"817c95ce-865f-41a5-a7bf-e88c222e8a4a\") " pod="metallb-system/speaker-26pqg" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078281 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a4c9e124-30b6-42a8-9b85-bef3d1836f12-reloader\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078327 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a4c9e124-30b6-42a8-9b85-bef3d1836f12-frr-conf\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078346 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xrdv\" (UniqueName: \"kubernetes.io/projected/817c95ce-865f-41a5-a7bf-e88c222e8a4a-kube-api-access-8xrdv\") pod \"speaker-26pqg\" (UID: \"817c95ce-865f-41a5-a7bf-e88c222e8a4a\") " pod="metallb-system/speaker-26pqg" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078388 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34bca420-ab68-464d-b96e-631fc55e2b41-metrics-certs\") pod \"controller-f8648f98b-dmgwc\" (UID: \"34bca420-ab68-464d-b96e-631fc55e2b41\") " pod="metallb-system/controller-f8648f98b-dmgwc" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078412 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m5fk\" (UniqueName: \"kubernetes.io/projected/bcd252b1-5939-47ba-99a4-300d504b615f-kube-api-access-9m5fk\") pod \"frr-k8s-webhook-server-7fcb986d4-7b7cs\" (UID: \"bcd252b1-5939-47ba-99a4-300d504b615f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078448 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr7wj\" (UniqueName: \"kubernetes.io/projected/34bca420-ab68-464d-b96e-631fc55e2b41-kube-api-access-hr7wj\") pod \"controller-f8648f98b-dmgwc\" 
(UID: \"34bca420-ab68-464d-b96e-631fc55e2b41\") " pod="metallb-system/controller-f8648f98b-dmgwc" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078467 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/817c95ce-865f-41a5-a7bf-e88c222e8a4a-metrics-certs\") pod \"speaker-26pqg\" (UID: \"817c95ce-865f-41a5-a7bf-e88c222e8a4a\") " pod="metallb-system/speaker-26pqg" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078489 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a4c9e124-30b6-42a8-9b85-bef3d1836f12-frr-startup\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078505 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a4c9e124-30b6-42a8-9b85-bef3d1836f12-metrics\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078525 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bcd252b1-5939-47ba-99a4-300d504b615f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-7b7cs\" (UID: \"bcd252b1-5939-47ba-99a4-300d504b615f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078555 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4c9e124-30b6-42a8-9b85-bef3d1836f12-metrics-certs\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078578 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxx25\" (UniqueName: \"kubernetes.io/projected/a4c9e124-30b6-42a8-9b85-bef3d1836f12-kube-api-access-rxx25\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078599 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/817c95ce-865f-41a5-a7bf-e88c222e8a4a-memberlist\") pod \"speaker-26pqg\" (UID: \"817c95ce-865f-41a5-a7bf-e88c222e8a4a\") " pod="metallb-system/speaker-26pqg" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078674 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34bca420-ab68-464d-b96e-631fc55e2b41-cert\") pod \"controller-f8648f98b-dmgwc\" (UID: \"34bca420-ab68-464d-b96e-631fc55e2b41\") " pod="metallb-system/controller-f8648f98b-dmgwc" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.078702 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a4c9e124-30b6-42a8-9b85-bef3d1836f12-frr-sockets\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.079227 4727 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a4c9e124-30b6-42a8-9b85-bef3d1836f12-reloader\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.079325 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a4c9e124-30b6-42a8-9b85-bef3d1836f12-metrics\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.079374 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a4c9e124-30b6-42a8-9b85-bef3d1836f12-frr-sockets\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.079541 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a4c9e124-30b6-42a8-9b85-bef3d1836f12-frr-conf\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.080422 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a4c9e124-30b6-42a8-9b85-bef3d1836f12-frr-startup\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.084706 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4c9e124-30b6-42a8-9b85-bef3d1836f12-metrics-certs\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.084750 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bcd252b1-5939-47ba-99a4-300d504b615f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-7b7cs\" (UID: \"bcd252b1-5939-47ba-99a4-300d504b615f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.103023 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxx25\" (UniqueName: \"kubernetes.io/projected/a4c9e124-30b6-42a8-9b85-bef3d1836f12-kube-api-access-rxx25\") pod \"frr-k8s-xcqhz\" (UID: \"a4c9e124-30b6-42a8-9b85-bef3d1836f12\") " pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.104316 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m5fk\" (UniqueName: \"kubernetes.io/projected/bcd252b1-5939-47ba-99a4-300d504b615f-kube-api-access-9m5fk\") pod \"frr-k8s-webhook-server-7fcb986d4-7b7cs\" (UID: \"bcd252b1-5939-47ba-99a4-300d504b615f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.180495 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34bca420-ab68-464d-b96e-631fc55e2b41-metrics-certs\") pod \"controller-f8648f98b-dmgwc\" (UID: \"34bca420-ab68-464d-b96e-631fc55e2b41\") " pod="metallb-system/controller-f8648f98b-dmgwc" Dec 10 14:50:34 crc kubenswrapper[4727]: 
I1210 14:50:34.180545 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr7wj\" (UniqueName: \"kubernetes.io/projected/34bca420-ab68-464d-b96e-631fc55e2b41-kube-api-access-hr7wj\") pod \"controller-f8648f98b-dmgwc\" (UID: \"34bca420-ab68-464d-b96e-631fc55e2b41\") " pod="metallb-system/controller-f8648f98b-dmgwc" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.180571 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/817c95ce-865f-41a5-a7bf-e88c222e8a4a-metrics-certs\") pod \"speaker-26pqg\" (UID: \"817c95ce-865f-41a5-a7bf-e88c222e8a4a\") " pod="metallb-system/speaker-26pqg" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.180610 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/817c95ce-865f-41a5-a7bf-e88c222e8a4a-memberlist\") pod \"speaker-26pqg\" (UID: \"817c95ce-865f-41a5-a7bf-e88c222e8a4a\") " pod="metallb-system/speaker-26pqg" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.180659 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34bca420-ab68-464d-b96e-631fc55e2b41-cert\") pod \"controller-f8648f98b-dmgwc\" (UID: \"34bca420-ab68-464d-b96e-631fc55e2b41\") " pod="metallb-system/controller-f8648f98b-dmgwc" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.180684 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/817c95ce-865f-41a5-a7bf-e88c222e8a4a-metallb-excludel2\") pod \"speaker-26pqg\" (UID: \"817c95ce-865f-41a5-a7bf-e88c222e8a4a\") " pod="metallb-system/speaker-26pqg" Dec 10 14:50:34 crc kubenswrapper[4727]: E1210 14:50:34.180747 4727 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 10 14:50:34 crc kubenswrapper[4727]: E1210 14:50:34.180772 4727 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 10 14:50:34 crc kubenswrapper[4727]: E1210 14:50:34.180835 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/817c95ce-865f-41a5-a7bf-e88c222e8a4a-memberlist podName:817c95ce-865f-41a5-a7bf-e88c222e8a4a nodeName:}" failed. No retries permitted until 2025-12-10 14:50:34.680798176 +0000 UTC m=+1138.875572718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/817c95ce-865f-41a5-a7bf-e88c222e8a4a-memberlist") pod "speaker-26pqg" (UID: "817c95ce-865f-41a5-a7bf-e88c222e8a4a") : secret "metallb-memberlist" not found Dec 10 14:50:34 crc kubenswrapper[4727]: E1210 14:50:34.180848 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/817c95ce-865f-41a5-a7bf-e88c222e8a4a-metrics-certs podName:817c95ce-865f-41a5-a7bf-e88c222e8a4a nodeName:}" failed. No retries permitted until 2025-12-10 14:50:34.680842267 +0000 UTC m=+1138.875616809 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/817c95ce-865f-41a5-a7bf-e88c222e8a4a-metrics-certs") pod "speaker-26pqg" (UID: "817c95ce-865f-41a5-a7bf-e88c222e8a4a") : secret "speaker-certs-secret" not found Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.181322 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xrdv\" (UniqueName: \"kubernetes.io/projected/817c95ce-865f-41a5-a7bf-e88c222e8a4a-kube-api-access-8xrdv\") pod \"speaker-26pqg\" (UID: \"817c95ce-865f-41a5-a7bf-e88c222e8a4a\") " pod="metallb-system/speaker-26pqg" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.181657 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/817c95ce-865f-41a5-a7bf-e88c222e8a4a-metallb-excludel2\") pod \"speaker-26pqg\" (UID: \"817c95ce-865f-41a5-a7bf-e88c222e8a4a\") " pod="metallb-system/speaker-26pqg" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.184328 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34bca420-ab68-464d-b96e-631fc55e2b41-metrics-certs\") pod \"controller-f8648f98b-dmgwc\" (UID: \"34bca420-ab68-464d-b96e-631fc55e2b41\") " pod="metallb-system/controller-f8648f98b-dmgwc" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.185952 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34bca420-ab68-464d-b96e-631fc55e2b41-cert\") pod \"controller-f8648f98b-dmgwc\" (UID: \"34bca420-ab68-464d-b96e-631fc55e2b41\") " pod="metallb-system/controller-f8648f98b-dmgwc" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.188990 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.211630 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xrdv\" (UniqueName: \"kubernetes.io/projected/817c95ce-865f-41a5-a7bf-e88c222e8a4a-kube-api-access-8xrdv\") pod \"speaker-26pqg\" (UID: \"817c95ce-865f-41a5-a7bf-e88c222e8a4a\") " pod="metallb-system/speaker-26pqg" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.218988 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr7wj\" (UniqueName: \"kubernetes.io/projected/34bca420-ab68-464d-b96e-631fc55e2b41-kube-api-access-hr7wj\") pod \"controller-f8648f98b-dmgwc\" (UID: \"34bca420-ab68-464d-b96e-631fc55e2b41\") " pod="metallb-system/controller-f8648f98b-dmgwc" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.237723 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.312173 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-dmgwc" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.694283 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/817c95ce-865f-41a5-a7bf-e88c222e8a4a-metrics-certs\") pod \"speaker-26pqg\" (UID: \"817c95ce-865f-41a5-a7bf-e88c222e8a4a\") " pod="metallb-system/speaker-26pqg" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.694793 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/817c95ce-865f-41a5-a7bf-e88c222e8a4a-memberlist\") pod \"speaker-26pqg\" (UID: \"817c95ce-865f-41a5-a7bf-e88c222e8a4a\") " pod="metallb-system/speaker-26pqg" Dec 10 14:50:34 crc kubenswrapper[4727]: E1210 14:50:34.694948 4727 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 10 14:50:34 crc kubenswrapper[4727]: E1210 14:50:34.695010 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/817c95ce-865f-41a5-a7bf-e88c222e8a4a-memberlist podName:817c95ce-865f-41a5-a7bf-e88c222e8a4a nodeName:}" failed. No retries permitted until 2025-12-10 14:50:35.694994906 +0000 UTC m=+1139.889769448 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/817c95ce-865f-41a5-a7bf-e88c222e8a4a-memberlist") pod "speaker-26pqg" (UID: "817c95ce-865f-41a5-a7bf-e88c222e8a4a") : secret "metallb-memberlist" not found Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.704656 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/817c95ce-865f-41a5-a7bf-e88c222e8a4a-metrics-certs\") pod \"speaker-26pqg\" (UID: \"817c95ce-865f-41a5-a7bf-e88c222e8a4a\") " pod="metallb-system/speaker-26pqg" Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.830831 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-dmgwc"] Dec 10 14:50:34 crc kubenswrapper[4727]: I1210 14:50:34.880605 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs"] Dec 10 14:50:34 crc kubenswrapper[4727]: W1210 14:50:34.889076 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcd252b1_5939_47ba_99a4_300d504b615f.slice/crio-40adb18a3fc0fc5f78baa9e3dbb55d1775b3100ae7a8f5b523b4c910e1060535 WatchSource:0}: Error finding container 40adb18a3fc0fc5f78baa9e3dbb55d1775b3100ae7a8f5b523b4c910e1060535: Status 404 returned error can't find the container with id 40adb18a3fc0fc5f78baa9e3dbb55d1775b3100ae7a8f5b523b4c910e1060535 Dec 10 14:50:35 crc kubenswrapper[4727]: I1210 14:50:35.256304 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-dmgwc" event={"ID":"34bca420-ab68-464d-b96e-631fc55e2b41","Type":"ContainerStarted","Data":"f3967ccc708629d85d36baf638c9e21a55c543dbdaef7e4cc75b612c38f36772"} Dec 10 14:50:35 crc kubenswrapper[4727]: I1210 14:50:35.256645 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-dmgwc" Dec 10 14:50:35 crc kubenswrapper[4727]: I1210 14:50:35.256662 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-dmgwc" 
event={"ID":"34bca420-ab68-464d-b96e-631fc55e2b41","Type":"ContainerStarted","Data":"59f6b39e79db4e0c20dbd61081b67d8724c8c4e298470b576ab5b08598885b76"} Dec 10 14:50:35 crc kubenswrapper[4727]: I1210 14:50:35.256674 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-dmgwc" event={"ID":"34bca420-ab68-464d-b96e-631fc55e2b41","Type":"ContainerStarted","Data":"adc78296ca99532c1883224d9d05bd60f85c15dd096da22c21dc008ddaab8a90"} Dec 10 14:50:35 crc kubenswrapper[4727]: I1210 14:50:35.259707 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs" event={"ID":"bcd252b1-5939-47ba-99a4-300d504b615f","Type":"ContainerStarted","Data":"40adb18a3fc0fc5f78baa9e3dbb55d1775b3100ae7a8f5b523b4c910e1060535"} Dec 10 14:50:35 crc kubenswrapper[4727]: I1210 14:50:35.261583 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xcqhz" event={"ID":"a4c9e124-30b6-42a8-9b85-bef3d1836f12","Type":"ContainerStarted","Data":"49ea002dc97c173416542a54ecf5e462a7938de58a6f48c9d3108055acb9a05c"} Dec 10 14:50:35 crc kubenswrapper[4727]: I1210 14:50:35.282862 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-dmgwc" podStartSLOduration=2.282824323 podStartE2EDuration="2.282824323s" podCreationTimestamp="2025-12-10 14:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:50:35.280260578 +0000 UTC m=+1139.475035120" watchObservedRunningTime="2025-12-10 14:50:35.282824323 +0000 UTC m=+1139.477598865" Dec 10 14:50:35 crc kubenswrapper[4727]: I1210 14:50:35.709380 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/817c95ce-865f-41a5-a7bf-e88c222e8a4a-memberlist\") pod \"speaker-26pqg\" (UID: \"817c95ce-865f-41a5-a7bf-e88c222e8a4a\") " pod="metallb-system/speaker-26pqg" Dec 10 14:50:35 crc kubenswrapper[4727]: I1210 14:50:35.717604 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/817c95ce-865f-41a5-a7bf-e88c222e8a4a-memberlist\") pod \"speaker-26pqg\" (UID: \"817c95ce-865f-41a5-a7bf-e88c222e8a4a\") " pod="metallb-system/speaker-26pqg" Dec 10 14:50:35 crc kubenswrapper[4727]: I1210 14:50:35.836816 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-26pqg" Dec 10 14:50:36 crc kubenswrapper[4727]: I1210 14:50:36.280184 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-26pqg" event={"ID":"817c95ce-865f-41a5-a7bf-e88c222e8a4a","Type":"ContainerStarted","Data":"f60b95cc8f2767a1e1e6f616dee806f8a4cf8bab56f41e1d6b2b23a91afc766f"} Dec 10 14:50:37 crc kubenswrapper[4727]: I1210 14:50:37.298533 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-26pqg" event={"ID":"817c95ce-865f-41a5-a7bf-e88c222e8a4a","Type":"ContainerStarted","Data":"2180450cd9cf01400653dba2ab1210b15e46b1c2bb0cbfab4e4767f45b3c1788"} Dec 10 14:50:37 crc kubenswrapper[4727]: I1210 14:50:37.298586 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-26pqg" event={"ID":"817c95ce-865f-41a5-a7bf-e88c222e8a4a","Type":"ContainerStarted","Data":"f0fd4224f6aca0cd54cb1915f4366095f453320a454c959293ec4b0de44891af"} Dec 10 14:50:37 crc kubenswrapper[4727]: I1210 14:50:37.298765 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-26pqg" Dec 10 14:50:37 crc kubenswrapper[4727]: I1210 14:50:37.345607 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-26pqg" podStartSLOduration=4.345586513 podStartE2EDuration="4.345586513s" podCreationTimestamp="2025-12-10 14:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:50:37.342557736 +0000 UTC m=+1141.537332278" watchObservedRunningTime="2025-12-10 14:50:37.345586513 +0000 UTC m=+1141.540361055" Dec 10 14:50:44 crc kubenswrapper[4727]: I1210 14:50:44.318028 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-dmgwc" Dec 10 14:50:45 crc kubenswrapper[4727]: I1210 14:50:45.410356 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs" event={"ID":"bcd252b1-5939-47ba-99a4-300d504b615f","Type":"ContainerStarted","Data":"b1db359dd656e5426e914f067b4f7b728b4b8015c60b40a3606f25da6c32fc3f"} Dec 10 14:50:45 crc kubenswrapper[4727]: I1210 14:50:45.410682 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs" Dec 10 14:50:45 crc kubenswrapper[4727]: I1210 14:50:45.413810 4727 generic.go:334] "Generic (PLEG): container finished" podID="a4c9e124-30b6-42a8-9b85-bef3d1836f12" containerID="68a7d039fcf0509ae74e8da6c55952e456d3687cb1a81b1840e38d96dd9c3029" exitCode=0 Dec 10 14:50:45 crc kubenswrapper[4727]: I1210 14:50:45.413891 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xcqhz" event={"ID":"a4c9e124-30b6-42a8-9b85-bef3d1836f12","Type":"ContainerDied","Data":"68a7d039fcf0509ae74e8da6c55952e456d3687cb1a81b1840e38d96dd9c3029"} Dec 10 14:50:45 crc kubenswrapper[4727]: I1210 14:50:45.450674 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs" podStartSLOduration=2.493425195 podStartE2EDuration="12.45064147s" podCreationTimestamp="2025-12-10 14:50:33 +0000 UTC" firstStartedPulling="2025-12-10 14:50:34.892073177 +0000 UTC m=+1139.086847719" lastFinishedPulling="2025-12-10 14:50:44.849289452 +0000 UTC m=+1149.044063994" observedRunningTime="2025-12-10 14:50:45.43117943 +0000 UTC m=+1149.625953972" 
watchObservedRunningTime="2025-12-10 14:50:45.45064147 +0000 UTC m=+1149.645416012" Dec 10 14:50:46 crc kubenswrapper[4727]: I1210 14:50:46.423310 4727 generic.go:334] "Generic (PLEG): container finished" podID="a4c9e124-30b6-42a8-9b85-bef3d1836f12" containerID="a0e1854799e96b1b1cbc2659d1b6259b4dc072c1ba0e14fd56e77fe070dd8ec0" exitCode=0 Dec 10 14:50:46 crc kubenswrapper[4727]: I1210 14:50:46.423440 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xcqhz" event={"ID":"a4c9e124-30b6-42a8-9b85-bef3d1836f12","Type":"ContainerDied","Data":"a0e1854799e96b1b1cbc2659d1b6259b4dc072c1ba0e14fd56e77fe070dd8ec0"} Dec 10 14:50:47 crc kubenswrapper[4727]: I1210 14:50:47.431196 4727 generic.go:334] "Generic (PLEG): container finished" podID="a4c9e124-30b6-42a8-9b85-bef3d1836f12" containerID="86800847d6aad02db13224df98e5658a306bfd5f455d28bbb19832d33bafbda0" exitCode=0 Dec 10 14:50:47 crc kubenswrapper[4727]: I1210 14:50:47.431243 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xcqhz" event={"ID":"a4c9e124-30b6-42a8-9b85-bef3d1836f12","Type":"ContainerDied","Data":"86800847d6aad02db13224df98e5658a306bfd5f455d28bbb19832d33bafbda0"} Dec 10 14:50:48 crc kubenswrapper[4727]: I1210 14:50:48.442936 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xcqhz" event={"ID":"a4c9e124-30b6-42a8-9b85-bef3d1836f12","Type":"ContainerStarted","Data":"de7c4ea4d8ab31ec85a126d2620757154f4ec86cb6bc54d30b04beb85fc192b4"} Dec 10 14:50:48 crc kubenswrapper[4727]: I1210 14:50:48.443275 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xcqhz" event={"ID":"a4c9e124-30b6-42a8-9b85-bef3d1836f12","Type":"ContainerStarted","Data":"1adaf12aa9f78fc8bf7525427fb341a8176a9cc2f9a4fc1a71be37866af13b3b"} Dec 10 14:50:48 crc kubenswrapper[4727]: I1210 14:50:48.443299 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:48 crc kubenswrapper[4727]: I1210 14:50:48.443312 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xcqhz" event={"ID":"a4c9e124-30b6-42a8-9b85-bef3d1836f12","Type":"ContainerStarted","Data":"36deb3c5417a1248dd4e2a6da3d697be5cfda85cd19ae221db529e52f5811d2c"} Dec 10 14:50:48 crc kubenswrapper[4727]: I1210 14:50:48.443326 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xcqhz" event={"ID":"a4c9e124-30b6-42a8-9b85-bef3d1836f12","Type":"ContainerStarted","Data":"2d753f00d23c15c7f94bdaaaf6598407524fd43d978ec45ced541dc1de86ae55"} Dec 10 14:50:48 crc kubenswrapper[4727]: I1210 14:50:48.443338 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xcqhz" event={"ID":"a4c9e124-30b6-42a8-9b85-bef3d1836f12","Type":"ContainerStarted","Data":"8a888a4943acaf2e0dbfb92b9d2a05061f29bd1100d7a92f144acdb63a01e8cf"} Dec 10 14:50:48 crc kubenswrapper[4727]: I1210 14:50:48.443349 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xcqhz" event={"ID":"a4c9e124-30b6-42a8-9b85-bef3d1836f12","Type":"ContainerStarted","Data":"8b6a9a4c35c6ad33760a55f4a76347c221075ca8b1eca78fb5419a7706d9318c"} Dec 10 14:50:48 crc kubenswrapper[4727]: I1210 14:50:48.465825 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-xcqhz" podStartSLOduration=5.349467543 podStartE2EDuration="15.465804352s" podCreationTimestamp="2025-12-10 14:50:33 +0000 UTC" firstStartedPulling="2025-12-10 14:50:34.711935903 +0000 UTC 
m=+1138.906710445" lastFinishedPulling="2025-12-10 14:50:44.828272712 +0000 UTC m=+1149.023047254" observedRunningTime="2025-12-10 14:50:48.464252893 +0000 UTC m=+1152.659027455" watchObservedRunningTime="2025-12-10 14:50:48.465804352 +0000 UTC m=+1152.660578894" Dec 10 14:50:49 crc kubenswrapper[4727]: I1210 14:50:49.191239 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:49 crc kubenswrapper[4727]: I1210 14:50:49.236018 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:50:55 crc kubenswrapper[4727]: I1210 14:50:55.840375 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-26pqg" Dec 10 14:50:59 crc kubenswrapper[4727]: I1210 14:50:59.091134 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pfqzr"] Dec 10 14:50:59 crc kubenswrapper[4727]: I1210 14:50:59.093415 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pfqzr" Dec 10 14:50:59 crc kubenswrapper[4727]: I1210 14:50:59.097851 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pfqzr"] Dec 10 14:50:59 crc kubenswrapper[4727]: I1210 14:50:59.098119 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 10 14:50:59 crc kubenswrapper[4727]: I1210 14:50:59.098147 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 10 14:50:59 crc kubenswrapper[4727]: I1210 14:50:59.109215 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rs9p6" Dec 10 14:50:59 crc kubenswrapper[4727]: I1210 14:50:59.161277 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f2kx\" (UniqueName: \"kubernetes.io/projected/7642b755-0985-4aa4-b5aa-e6ef4ffe8a91-kube-api-access-2f2kx\") pod \"openstack-operator-index-pfqzr\" (UID: \"7642b755-0985-4aa4-b5aa-e6ef4ffe8a91\") " pod="openstack-operators/openstack-operator-index-pfqzr" Dec 10 14:50:59 crc kubenswrapper[4727]: I1210 14:50:59.263120 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f2kx\" (UniqueName: \"kubernetes.io/projected/7642b755-0985-4aa4-b5aa-e6ef4ffe8a91-kube-api-access-2f2kx\") pod \"openstack-operator-index-pfqzr\" (UID: \"7642b755-0985-4aa4-b5aa-e6ef4ffe8a91\") " pod="openstack-operators/openstack-operator-index-pfqzr" Dec 10 14:50:59 crc kubenswrapper[4727]: I1210 14:50:59.295802 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f2kx\" (UniqueName: \"kubernetes.io/projected/7642b755-0985-4aa4-b5aa-e6ef4ffe8a91-kube-api-access-2f2kx\") pod \"openstack-operator-index-pfqzr\" (UID: \"7642b755-0985-4aa4-b5aa-e6ef4ffe8a91\") " pod="openstack-operators/openstack-operator-index-pfqzr" Dec 10 14:50:59 crc kubenswrapper[4727]: I1210 14:50:59.419103 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pfqzr" Dec 10 14:50:59 crc kubenswrapper[4727]: I1210 14:50:59.868662 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pfqzr"] Dec 10 14:50:59 crc kubenswrapper[4727]: I1210 14:50:59.882230 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 14:51:00 crc kubenswrapper[4727]: I1210 14:51:00.524054 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pfqzr" event={"ID":"7642b755-0985-4aa4-b5aa-e6ef4ffe8a91","Type":"ContainerStarted","Data":"941855237a5de76a572e65b3b3316bb1cd2f511538f8cadc204a467fe399762a"} Dec 10 14:51:01 crc kubenswrapper[4727]: I1210 14:51:01.257588 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pfqzr"] Dec 10 14:51:01 crc kubenswrapper[4727]: I1210 14:51:01.862624 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ss25w"] Dec 10 14:51:01 crc kubenswrapper[4727]: I1210 14:51:01.863583 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ss25w" Dec 10 14:51:01 crc kubenswrapper[4727]: I1210 14:51:01.887698 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ss25w"] Dec 10 14:51:02 crc kubenswrapper[4727]: I1210 14:51:02.000114 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwzcr\" (UniqueName: \"kubernetes.io/projected/64b105d9-469b-4db7-9358-60e9ed040aee-kube-api-access-zwzcr\") pod \"openstack-operator-index-ss25w\" (UID: \"64b105d9-469b-4db7-9358-60e9ed040aee\") " pod="openstack-operators/openstack-operator-index-ss25w" Dec 10 14:51:02 crc kubenswrapper[4727]: I1210 14:51:02.101173 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwzcr\" (UniqueName: \"kubernetes.io/projected/64b105d9-469b-4db7-9358-60e9ed040aee-kube-api-access-zwzcr\") pod \"openstack-operator-index-ss25w\" (UID: \"64b105d9-469b-4db7-9358-60e9ed040aee\") " pod="openstack-operators/openstack-operator-index-ss25w" Dec 10 14:51:02 crc kubenswrapper[4727]: I1210 14:51:02.136764 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwzcr\" (UniqueName: \"kubernetes.io/projected/64b105d9-469b-4db7-9358-60e9ed040aee-kube-api-access-zwzcr\") pod \"openstack-operator-index-ss25w\" (UID: \"64b105d9-469b-4db7-9358-60e9ed040aee\") " pod="openstack-operators/openstack-operator-index-ss25w" Dec 10 14:51:02 crc kubenswrapper[4727]: I1210 14:51:02.199422 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ss25w" Dec 10 14:51:02 crc kubenswrapper[4727]: I1210 14:51:02.953767 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ss25w"] Dec 10 14:51:04 crc kubenswrapper[4727]: I1210 14:51:04.193229 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-xcqhz" Dec 10 14:51:04 crc kubenswrapper[4727]: I1210 14:51:04.245097 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-7b7cs" Dec 10 14:51:04 crc kubenswrapper[4727]: I1210 14:51:04.601502 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pfqzr" event={"ID":"7642b755-0985-4aa4-b5aa-e6ef4ffe8a91","Type":"ContainerStarted","Data":"4cdcee1b1c1641f4c448f90dbed96dd1e2b388c9ad32ab3a63e608086b7afa63"} Dec 10 14:51:04 crc kubenswrapper[4727]: I1210 14:51:04.601627 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-pfqzr" podUID="7642b755-0985-4aa4-b5aa-e6ef4ffe8a91" containerName="registry-server" containerID="cri-o://4cdcee1b1c1641f4c448f90dbed96dd1e2b388c9ad32ab3a63e608086b7afa63" gracePeriod=2 Dec 10 14:51:04 crc kubenswrapper[4727]: I1210 14:51:04.603652 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ss25w" event={"ID":"64b105d9-469b-4db7-9358-60e9ed040aee","Type":"ContainerStarted","Data":"60031dc427aa6870a0b086112bdd588f773e3b6cd2ffb0169dff7c65881d4652"} Dec 10 14:51:04 crc kubenswrapper[4727]: I1210 14:51:04.603697 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ss25w" event={"ID":"64b105d9-469b-4db7-9358-60e9ed040aee","Type":"ContainerStarted","Data":"f488a69d9865e10205b90c19bff8a3ad31af34c38a3c65d3742d7fb876809655"} Dec 10 14:51:04 crc kubenswrapper[4727]: I1210 14:51:04.645365 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pfqzr" podStartSLOduration=1.758895485 podStartE2EDuration="5.645343141s" podCreationTimestamp="2025-12-10 14:50:59 +0000 UTC" firstStartedPulling="2025-12-10 14:50:59.881986505 +0000 UTC m=+1164.076761047" lastFinishedPulling="2025-12-10 14:51:03.768434151 +0000 UTC m=+1167.963208703" observedRunningTime="2025-12-10 14:51:04.627395987 +0000 UTC m=+1168.822170529" watchObservedRunningTime="2025-12-10 14:51:04.645343141 +0000 UTC m=+1168.840117683" Dec 10 14:51:04 crc kubenswrapper[4727]: I1210 14:51:04.645460 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ss25w" podStartSLOduration=3.5136713139999998 podStartE2EDuration="3.645455933s" podCreationTimestamp="2025-12-10 14:51:01 +0000 UTC" firstStartedPulling="2025-12-10 14:51:03.641344711 +0000 UTC m=+1167.836119253" lastFinishedPulling="2025-12-10 14:51:03.77312933 +0000 UTC m=+1167.967903872" observedRunningTime="2025-12-10 14:51:04.644293984 +0000 UTC m=+1168.839068526" watchObservedRunningTime="2025-12-10 14:51:04.645455933 +0000 UTC m=+1168.840230475" Dec 10 14:51:05 crc kubenswrapper[4727]: I1210 14:51:05.554900 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pfqzr" Dec 10 14:51:05 crc kubenswrapper[4727]: I1210 14:51:05.611314 4727 generic.go:334] "Generic (PLEG): container finished" podID="7642b755-0985-4aa4-b5aa-e6ef4ffe8a91" containerID="4cdcee1b1c1641f4c448f90dbed96dd1e2b388c9ad32ab3a63e608086b7afa63" exitCode=0 Dec 10 14:51:05 crc kubenswrapper[4727]: I1210 14:51:05.611355 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pfqzr" event={"ID":"7642b755-0985-4aa4-b5aa-e6ef4ffe8a91","Type":"ContainerDied","Data":"4cdcee1b1c1641f4c448f90dbed96dd1e2b388c9ad32ab3a63e608086b7afa63"} Dec 10 14:51:05 crc kubenswrapper[4727]: I1210 14:51:05.611393 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pfqzr" event={"ID":"7642b755-0985-4aa4-b5aa-e6ef4ffe8a91","Type":"ContainerDied","Data":"941855237a5de76a572e65b3b3316bb1cd2f511538f8cadc204a467fe399762a"} Dec 10 14:51:05 crc kubenswrapper[4727]: I1210 14:51:05.611429 4727 scope.go:117] "RemoveContainer" containerID="4cdcee1b1c1641f4c448f90dbed96dd1e2b388c9ad32ab3a63e608086b7afa63" Dec 10 14:51:05 crc kubenswrapper[4727]: I1210 14:51:05.611503 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pfqzr" Dec 10 14:51:05 crc kubenswrapper[4727]: I1210 14:51:05.634424 4727 scope.go:117] "RemoveContainer" containerID="4cdcee1b1c1641f4c448f90dbed96dd1e2b388c9ad32ab3a63e608086b7afa63" Dec 10 14:51:05 crc kubenswrapper[4727]: E1210 14:51:05.634898 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cdcee1b1c1641f4c448f90dbed96dd1e2b388c9ad32ab3a63e608086b7afa63\": container with ID starting with 4cdcee1b1c1641f4c448f90dbed96dd1e2b388c9ad32ab3a63e608086b7afa63 not found: ID does not exist" containerID="4cdcee1b1c1641f4c448f90dbed96dd1e2b388c9ad32ab3a63e608086b7afa63" Dec 10 14:51:05 crc kubenswrapper[4727]: I1210 14:51:05.634961 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cdcee1b1c1641f4c448f90dbed96dd1e2b388c9ad32ab3a63e608086b7afa63"} err="failed to get container status \"4cdcee1b1c1641f4c448f90dbed96dd1e2b388c9ad32ab3a63e608086b7afa63\": rpc error: code = NotFound desc = could not find container \"4cdcee1b1c1641f4c448f90dbed96dd1e2b388c9ad32ab3a63e608086b7afa63\": container with ID starting with 4cdcee1b1c1641f4c448f90dbed96dd1e2b388c9ad32ab3a63e608086b7afa63 not found: ID does not exist" Dec 10 14:51:05 crc kubenswrapper[4727]: I1210 14:51:05.674034 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f2kx\" (UniqueName: \"kubernetes.io/projected/7642b755-0985-4aa4-b5aa-e6ef4ffe8a91-kube-api-access-2f2kx\") pod \"7642b755-0985-4aa4-b5aa-e6ef4ffe8a91\" (UID: \"7642b755-0985-4aa4-b5aa-e6ef4ffe8a91\") " Dec 10 14:51:05 crc kubenswrapper[4727]: I1210 14:51:05.680271 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7642b755-0985-4aa4-b5aa-e6ef4ffe8a91-kube-api-access-2f2kx" (OuterVolumeSpecName: "kube-api-access-2f2kx") pod "7642b755-0985-4aa4-b5aa-e6ef4ffe8a91" (UID: "7642b755-0985-4aa4-b5aa-e6ef4ffe8a91"). InnerVolumeSpecName "kube-api-access-2f2kx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:51:05 crc kubenswrapper[4727]: I1210 14:51:05.775522 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f2kx\" (UniqueName: \"kubernetes.io/projected/7642b755-0985-4aa4-b5aa-e6ef4ffe8a91-kube-api-access-2f2kx\") on node \"crc\" DevicePath \"\"" Dec 10 14:51:05 crc kubenswrapper[4727]: I1210 14:51:05.943540 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pfqzr"] Dec 10 14:51:05 crc kubenswrapper[4727]: I1210 14:51:05.948202 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-pfqzr"] Dec 10 14:51:06 crc kubenswrapper[4727]: I1210 14:51:06.573618 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7642b755-0985-4aa4-b5aa-e6ef4ffe8a91" path="/var/lib/kubelet/pods/7642b755-0985-4aa4-b5aa-e6ef4ffe8a91/volumes" Dec 10 14:51:12 crc kubenswrapper[4727]: I1210 14:51:12.199622 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-ss25w" Dec 10 14:51:12 crc kubenswrapper[4727]: I1210 14:51:12.200270 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-ss25w" Dec 10 14:51:12 crc kubenswrapper[4727]: I1210 14:51:12.243837 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ss25w" Dec 10 14:51:12 crc kubenswrapper[4727]: I1210 14:51:12.683832 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ss25w" Dec 10 14:51:14 crc kubenswrapper[4727]: I1210 14:51:14.904856 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz"] Dec 10 14:51:14 crc kubenswrapper[4727]: E1210 14:51:14.905575 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7642b755-0985-4aa4-b5aa-e6ef4ffe8a91" containerName="registry-server" Dec 10 14:51:14 crc kubenswrapper[4727]: I1210 14:51:14.905596 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7642b755-0985-4aa4-b5aa-e6ef4ffe8a91" containerName="registry-server" Dec 10 14:51:14 crc kubenswrapper[4727]: I1210 14:51:14.905797 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7642b755-0985-4aa4-b5aa-e6ef4ffe8a91" containerName="registry-server" Dec 10 14:51:14 crc kubenswrapper[4727]: I1210 14:51:14.907029 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" Dec 10 14:51:14 crc kubenswrapper[4727]: I1210 14:51:14.910405 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-f7n2q" Dec 10 14:51:14 crc kubenswrapper[4727]: I1210 14:51:14.918427 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz"] Dec 10 14:51:15 crc kubenswrapper[4727]: I1210 14:51:15.000241 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skxvx\" (UniqueName: \"kubernetes.io/projected/7037a391-6332-4903-9b2e-7910e334ae5d-kube-api-access-skxvx\") pod \"eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz\" (UID: \"7037a391-6332-4903-9b2e-7910e334ae5d\") " pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" Dec 10 14:51:15 crc kubenswrapper[4727]: I1210 14:51:15.000279 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7037a391-6332-4903-9b2e-7910e334ae5d-util\") pod \"eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz\" (UID: \"7037a391-6332-4903-9b2e-7910e334ae5d\") " pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" Dec 10 14:51:15 crc kubenswrapper[4727]: I1210 14:51:15.000311 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7037a391-6332-4903-9b2e-7910e334ae5d-bundle\") pod \"eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz\" (UID: \"7037a391-6332-4903-9b2e-7910e334ae5d\") " pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" Dec 10 14:51:15 crc kubenswrapper[4727]: I1210 14:51:15.100969 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skxvx\" (UniqueName: \"kubernetes.io/projected/7037a391-6332-4903-9b2e-7910e334ae5d-kube-api-access-skxvx\") pod \"eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz\" (UID: \"7037a391-6332-4903-9b2e-7910e334ae5d\") " pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" Dec 10 14:51:15 crc kubenswrapper[4727]: I1210 14:51:15.101223 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7037a391-6332-4903-9b2e-7910e334ae5d-util\") pod \"eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz\" (UID: \"7037a391-6332-4903-9b2e-7910e334ae5d\") " pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" Dec 10 14:51:15 crc kubenswrapper[4727]: I1210 14:51:15.101346 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7037a391-6332-4903-9b2e-7910e334ae5d-bundle\") pod \"eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz\" (UID: \"7037a391-6332-4903-9b2e-7910e334ae5d\") " pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" Dec 10 14:51:15 crc kubenswrapper[4727]: I1210 14:51:15.101684 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7037a391-6332-4903-9b2e-7910e334ae5d-util\") pod \"eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz\" (UID: \"7037a391-6332-4903-9b2e-7910e334ae5d\") " pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" Dec 10 14:51:15 crc kubenswrapper[4727]: I1210 14:51:15.101705 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7037a391-6332-4903-9b2e-7910e334ae5d-bundle\") pod \"eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz\" (UID: \"7037a391-6332-4903-9b2e-7910e334ae5d\") " pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" Dec 10 14:51:15 crc kubenswrapper[4727]: I1210 14:51:15.120980 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skxvx\" (UniqueName: \"kubernetes.io/projected/7037a391-6332-4903-9b2e-7910e334ae5d-kube-api-access-skxvx\") pod \"eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz\" (UID: \"7037a391-6332-4903-9b2e-7910e334ae5d\") " pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" Dec 10 14:51:15 crc kubenswrapper[4727]: I1210 14:51:15.225575 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" Dec 10 14:51:15 crc kubenswrapper[4727]: I1210 14:51:15.625087 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz"] Dec 10 14:51:15 crc kubenswrapper[4727]: I1210 14:51:15.681566 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" event={"ID":"7037a391-6332-4903-9b2e-7910e334ae5d","Type":"ContainerStarted","Data":"801be1fb6a114aca6afd4c58fb191da9c8255255d451371e326d8900e9779195"} Dec 10 14:51:16 crc kubenswrapper[4727]: I1210 14:51:16.689265 4727 generic.go:334] "Generic (PLEG): container finished" podID="7037a391-6332-4903-9b2e-7910e334ae5d" containerID="ccc5adb913b01151d2281fd7eb41f17a7acce2653c4eb5f4e4e1b3aaf4a2e3b2" exitCode=0 Dec 10 14:51:16 crc kubenswrapper[4727]: I1210 14:51:16.689754 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" event={"ID":"7037a391-6332-4903-9b2e-7910e334ae5d","Type":"ContainerDied","Data":"ccc5adb913b01151d2281fd7eb41f17a7acce2653c4eb5f4e4e1b3aaf4a2e3b2"} Dec 10 14:51:17 crc kubenswrapper[4727]: I1210 14:51:17.705492 4727 generic.go:334] "Generic (PLEG): container finished" podID="7037a391-6332-4903-9b2e-7910e334ae5d" containerID="e45c9b1232cf3f37e68c98c52f6c981b6179ee308622963273cd61bcd78d9515" exitCode=0 Dec 10 14:51:17 crc kubenswrapper[4727]: I1210 14:51:17.705545 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" event={"ID":"7037a391-6332-4903-9b2e-7910e334ae5d","Type":"ContainerDied","Data":"e45c9b1232cf3f37e68c98c52f6c981b6179ee308622963273cd61bcd78d9515"} Dec 10 14:51:18 crc kubenswrapper[4727]: I1210 14:51:18.714530 4727 generic.go:334] "Generic (PLEG): container finished" podID="7037a391-6332-4903-9b2e-7910e334ae5d" containerID="a14997df8d61aee1187b949bbf8c1a6433572e4f000dd6697fa92d94011959a0" exitCode=0 Dec 10 14:51:18 crc kubenswrapper[4727]: I1210 14:51:18.714610 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" event={"ID":"7037a391-6332-4903-9b2e-7910e334ae5d","Type":"ContainerDied","Data":"a14997df8d61aee1187b949bbf8c1a6433572e4f000dd6697fa92d94011959a0"} Dec 10 14:51:20 crc kubenswrapper[4727]: I1210 14:51:20.029919 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" Dec 10 14:51:20 crc kubenswrapper[4727]: I1210 14:51:20.230010 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skxvx\" (UniqueName: \"kubernetes.io/projected/7037a391-6332-4903-9b2e-7910e334ae5d-kube-api-access-skxvx\") pod \"7037a391-6332-4903-9b2e-7910e334ae5d\" (UID: \"7037a391-6332-4903-9b2e-7910e334ae5d\") " Dec 10 14:51:20 crc kubenswrapper[4727]: I1210 14:51:20.230403 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7037a391-6332-4903-9b2e-7910e334ae5d-util\") pod \"7037a391-6332-4903-9b2e-7910e334ae5d\" (UID: \"7037a391-6332-4903-9b2e-7910e334ae5d\") " Dec 10 14:51:20 crc kubenswrapper[4727]: I1210 14:51:20.230431 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7037a391-6332-4903-9b2e-7910e334ae5d-bundle\") pod \"7037a391-6332-4903-9b2e-7910e334ae5d\" (UID: \"7037a391-6332-4903-9b2e-7910e334ae5d\") " Dec 10 14:51:20 crc kubenswrapper[4727]: I1210 14:51:20.231399 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7037a391-6332-4903-9b2e-7910e334ae5d-bundle" (OuterVolumeSpecName: "bundle") pod "7037a391-6332-4903-9b2e-7910e334ae5d" (UID: "7037a391-6332-4903-9b2e-7910e334ae5d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:51:20 crc kubenswrapper[4727]: I1210 14:51:20.236852 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7037a391-6332-4903-9b2e-7910e334ae5d-kube-api-access-skxvx" (OuterVolumeSpecName: "kube-api-access-skxvx") pod "7037a391-6332-4903-9b2e-7910e334ae5d" (UID: "7037a391-6332-4903-9b2e-7910e334ae5d"). InnerVolumeSpecName "kube-api-access-skxvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:51:20 crc kubenswrapper[4727]: I1210 14:51:20.246853 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7037a391-6332-4903-9b2e-7910e334ae5d-util" (OuterVolumeSpecName: "util") pod "7037a391-6332-4903-9b2e-7910e334ae5d" (UID: "7037a391-6332-4903-9b2e-7910e334ae5d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:51:20 crc kubenswrapper[4727]: I1210 14:51:20.331838 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skxvx\" (UniqueName: \"kubernetes.io/projected/7037a391-6332-4903-9b2e-7910e334ae5d-kube-api-access-skxvx\") on node \"crc\" DevicePath \"\"" Dec 10 14:51:20 crc kubenswrapper[4727]: I1210 14:51:20.331876 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7037a391-6332-4903-9b2e-7910e334ae5d-util\") on node \"crc\" DevicePath \"\"" Dec 10 14:51:20 crc kubenswrapper[4727]: I1210 14:51:20.331886 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7037a391-6332-4903-9b2e-7910e334ae5d-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:51:20 crc kubenswrapper[4727]: I1210 14:51:20.737625 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" event={"ID":"7037a391-6332-4903-9b2e-7910e334ae5d","Type":"ContainerDied","Data":"801be1fb6a114aca6afd4c58fb191da9c8255255d451371e326d8900e9779195"} Dec 10 14:51:20 crc kubenswrapper[4727]: I1210 14:51:20.737677 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="801be1fb6a114aca6afd4c58fb191da9c8255255d451371e326d8900e9779195" Dec 10 14:51:20 crc kubenswrapper[4727]: I1210 14:51:20.738208 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz" Dec 10 14:51:25 crc kubenswrapper[4727]: I1210 14:51:25.010366 4727 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5qz7j container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 10 14:51:25 crc kubenswrapper[4727]: I1210 14:51:25.010784 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" podUID="174ca2d4-702a-48fe-83d9-a9bfc1353c78" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 10 14:51:25 crc kubenswrapper[4727]: I1210 14:51:25.010405 4727 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5qz7j container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 10 14:51:25 crc kubenswrapper[4727]: I1210 14:51:25.010922 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qz7j" podUID="174ca2d4-702a-48fe-83d9-a9bfc1353c78" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.349768 4727 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db"] Dec 10 14:51:27 crc kubenswrapper[4727]: E1210 14:51:27.350414 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7037a391-6332-4903-9b2e-7910e334ae5d" containerName="extract" Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.350432 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7037a391-6332-4903-9b2e-7910e334ae5d" containerName="extract" Dec 10 14:51:27 crc kubenswrapper[4727]: E1210 14:51:27.350449 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7037a391-6332-4903-9b2e-7910e334ae5d" containerName="pull" Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.350457 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7037a391-6332-4903-9b2e-7910e334ae5d" containerName="pull" Dec 10 14:51:27 crc kubenswrapper[4727]: E1210 14:51:27.350468 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7037a391-6332-4903-9b2e-7910e334ae5d" containerName="util" Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.350475 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7037a391-6332-4903-9b2e-7910e334ae5d" containerName="util" Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.350636 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7037a391-6332-4903-9b2e-7910e334ae5d" containerName="extract" Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.351200 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db" Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.357206 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-f2dv6" Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.423940 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smb7m\" (UniqueName: \"kubernetes.io/projected/b6f32208-e433-43a8-9cc3-1a0b642db859-kube-api-access-smb7m\") pod \"openstack-operator-controller-operator-54566bf895-5d2db\" (UID: \"b6f32208-e433-43a8-9cc3-1a0b642db859\") " pod="openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db" Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.435885 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db"] Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.525224 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smb7m\" (UniqueName: \"kubernetes.io/projected/b6f32208-e433-43a8-9cc3-1a0b642db859-kube-api-access-smb7m\") pod \"openstack-operator-controller-operator-54566bf895-5d2db\" (UID: \"b6f32208-e433-43a8-9cc3-1a0b642db859\") " pod="openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db" Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.549752 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smb7m\" (UniqueName: \"kubernetes.io/projected/b6f32208-e433-43a8-9cc3-1a0b642db859-kube-api-access-smb7m\") pod \"openstack-operator-controller-operator-54566bf895-5d2db\" (UID: \"b6f32208-e433-43a8-9cc3-1a0b642db859\") " pod="openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db" Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.669588 4727 util.go:30] "No sandbox for pod 
Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.351200 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db"
Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.357206 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-f2dv6"
Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.423940 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smb7m\" (UniqueName: \"kubernetes.io/projected/b6f32208-e433-43a8-9cc3-1a0b642db859-kube-api-access-smb7m\") pod \"openstack-operator-controller-operator-54566bf895-5d2db\" (UID: \"b6f32208-e433-43a8-9cc3-1a0b642db859\") " pod="openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db"
Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.435885 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db"]
Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.525224 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smb7m\" (UniqueName: \"kubernetes.io/projected/b6f32208-e433-43a8-9cc3-1a0b642db859-kube-api-access-smb7m\") pod \"openstack-operator-controller-operator-54566bf895-5d2db\" (UID: \"b6f32208-e433-43a8-9cc3-1a0b642db859\") " pod="openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db"
Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.549752 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smb7m\" (UniqueName: \"kubernetes.io/projected/b6f32208-e433-43a8-9cc3-1a0b642db859-kube-api-access-smb7m\") pod \"openstack-operator-controller-operator-54566bf895-5d2db\" (UID: \"b6f32208-e433-43a8-9cc3-1a0b642db859\") " pod="openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db"
Dec 10 14:51:27 crc kubenswrapper[4727]: I1210 14:51:27.669588 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db"
Dec 10 14:51:28 crc kubenswrapper[4727]: I1210 14:51:28.389014 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db"]
Dec 10 14:51:28 crc kubenswrapper[4727]: I1210 14:51:28.792645 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db" event={"ID":"b6f32208-e433-43a8-9cc3-1a0b642db859","Type":"ContainerStarted","Data":"83113929d94a469398ba344224a3a0a9328b5976913da3f6d0e840405ec623b7"}
Dec 10 14:51:34 crc kubenswrapper[4727]: I1210 14:51:34.838310 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db" event={"ID":"b6f32208-e433-43a8-9cc3-1a0b642db859","Type":"ContainerStarted","Data":"1fd3ec1a14d180051735e2f8f2e0be377b9540e67a05d6d74b1f27c270418b3d"}
Dec 10 14:51:34 crc kubenswrapper[4727]: I1210 14:51:34.838722 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db"
Dec 10 14:51:34 crc kubenswrapper[4727]: I1210 14:51:34.871437 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-54566bf895-5d2db" podStartSLOduration=2.014301343 podStartE2EDuration="7.871397984s" podCreationTimestamp="2025-12-10 14:51:27 +0000 UTC" firstStartedPulling="2025-12-10 14:51:28.408427399 +0000 UTC m=+1192.603201941" lastFinishedPulling="2025-12-10 14:51:34.26552404 +0000 UTC m=+1198.460298582" observedRunningTime="2025-12-10 14:51:34.867802473 +0000 UTC m=+1199.062577035" watchObservedRunningTime="2025-12-10 14:51:34.871397984 +0000 UTC m=+1199.066172536"
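[Editorial note on the startup-latency entry above: the logged values are internally consistent. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A Go sketch checking the arithmetic from the logged timestamps; the field relationships are inferred from the values, not taken from kubelet source:]

    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(s string) time.Time {
    	// Layout matching the timestamps as printed in the log entry.
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-12-10 14:51:27 +0000 UTC")
    	firstPull := mustParse("2025-12-10 14:51:28.408427399 +0000 UTC")
    	lastPull := mustParse("2025-12-10 14:51:34.26552404 +0000 UTC")
    	running := mustParse("2025-12-10 14:51:34.871397984 +0000 UTC")

    	e2e := running.Sub(created)
    	slo := e2e - lastPull.Sub(firstPull)
    	fmt.Println(e2e) // 7.871397984s, the logged podStartE2EDuration
    	fmt.Println(slo) // 2.014301343s, the logged podStartSLOduration
    }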
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-tvzpk" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.100201 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ps64p" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.103859 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-dwczj"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.109813 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-drdt4"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.111197 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-drdt4" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.115787 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-67c82" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.125759 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-tmlwp"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.127285 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tmlwp" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.131289 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2sz7d" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.149088 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-tvzpk"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.162392 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-drdt4"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.171614 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-tmlwp"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.190974 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-q9gqt"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.192192 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-q9gqt" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.198064 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jf8j\" (UniqueName: \"kubernetes.io/projected/862ae668-98ff-4531-9eca-6309953e1333-kube-api-access-6jf8j\") pod \"glance-operator-controller-manager-5697bb5779-tmlwp\" (UID: \"862ae668-98ff-4531-9eca-6309953e1333\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tmlwp" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.198289 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkhrm\" (UniqueName: \"kubernetes.io/projected/14d03018-c372-4ede-bb5d-47efd53f4d51-kube-api-access-nkhrm\") pod \"barbican-operator-controller-manager-7d9dfd778-dwczj\" (UID: \"14d03018-c372-4ede-bb5d-47efd53f4d51\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-dwczj" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.198399 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th7gx\" (UniqueName: \"kubernetes.io/projected/472840c5-9b95-4303-911c-7b27232236ea-kube-api-access-th7gx\") pod \"designate-operator-controller-manager-697fb699cf-drdt4\" (UID: \"472840c5-9b95-4303-911c-7b27232236ea\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-drdt4" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.198490 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7twcr\" (UniqueName: \"kubernetes.io/projected/42310864-a5cb-44fd-b5eb-c5bf1f9c8ce9-kube-api-access-7twcr\") pod \"cinder-operator-controller-manager-6c677c69b-tvzpk\" (UID: \"42310864-a5cb-44fd-b5eb-c5bf1f9c8ce9\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-tvzpk" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.198592 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsmz4\" (UniqueName: \"kubernetes.io/projected/4728dafd-bcaa-4b10-b2f5-9884b3d3a2b5-kube-api-access-hsmz4\") pod \"heat-operator-controller-manager-5f64f6f8bb-q9gqt\" (UID: \"4728dafd-bcaa-4b10-b2f5-9884b3d3a2b5\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-q9gqt" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.204165 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-h8wsg" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.218245 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-q9gqt"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.226065 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7vs7p"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.235024 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkhrm\" (UniqueName: \"kubernetes.io/projected/14d03018-c372-4ede-bb5d-47efd53f4d51-kube-api-access-nkhrm\") pod \"barbican-operator-controller-manager-7d9dfd778-dwczj\" (UID: \"14d03018-c372-4ede-bb5d-47efd53f4d51\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-dwczj" Dec 10 14:52:07 crc 
kubenswrapper[4727]: I1210 14:52:07.240244 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7vs7p" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.247491 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7vs7p"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.249474 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-m4dsz" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.273900 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.275019 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.285511 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-klj8f" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.285713 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.293793 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-nbjpv"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.294939 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-nbjpv" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.299430 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-dlrpz" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.300382 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsmz4\" (UniqueName: \"kubernetes.io/projected/4728dafd-bcaa-4b10-b2f5-9884b3d3a2b5-kube-api-access-hsmz4\") pod \"heat-operator-controller-manager-5f64f6f8bb-q9gqt\" (UID: \"4728dafd-bcaa-4b10-b2f5-9884b3d3a2b5\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-q9gqt" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.300495 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jf8j\" (UniqueName: \"kubernetes.io/projected/862ae668-98ff-4531-9eca-6309953e1333-kube-api-access-6jf8j\") pod \"glance-operator-controller-manager-5697bb5779-tmlwp\" (UID: \"862ae668-98ff-4531-9eca-6309953e1333\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tmlwp" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.300625 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th7gx\" (UniqueName: \"kubernetes.io/projected/472840c5-9b95-4303-911c-7b27232236ea-kube-api-access-th7gx\") pod \"designate-operator-controller-manager-697fb699cf-drdt4\" (UID: \"472840c5-9b95-4303-911c-7b27232236ea\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-drdt4" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.300751 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7twcr\" (UniqueName: \"kubernetes.io/projected/42310864-a5cb-44fd-b5eb-c5bf1f9c8ce9-kube-api-access-7twcr\") pod \"cinder-operator-controller-manager-6c677c69b-tvzpk\" (UID: \"42310864-a5cb-44fd-b5eb-c5bf1f9c8ce9\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-tvzpk" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.305979 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-nbjpv"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.340345 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7twcr\" (UniqueName: \"kubernetes.io/projected/42310864-a5cb-44fd-b5eb-c5bf1f9c8ce9-kube-api-access-7twcr\") pod \"cinder-operator-controller-manager-6c677c69b-tvzpk\" (UID: \"42310864-a5cb-44fd-b5eb-c5bf1f9c8ce9\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-tvzpk" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.341218 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th7gx\" (UniqueName: \"kubernetes.io/projected/472840c5-9b95-4303-911c-7b27232236ea-kube-api-access-th7gx\") pod \"designate-operator-controller-manager-697fb699cf-drdt4\" (UID: \"472840c5-9b95-4303-911c-7b27232236ea\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-drdt4" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.344596 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsmz4\" (UniqueName: \"kubernetes.io/projected/4728dafd-bcaa-4b10-b2f5-9884b3d3a2b5-kube-api-access-hsmz4\") pod \"heat-operator-controller-manager-5f64f6f8bb-q9gqt\" (UID: \"4728dafd-bcaa-4b10-b2f5-9884b3d3a2b5\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-q9gqt" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.344933 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jf8j\" (UniqueName: \"kubernetes.io/projected/862ae668-98ff-4531-9eca-6309953e1333-kube-api-access-6jf8j\") pod \"glance-operator-controller-manager-5697bb5779-tmlwp\" (UID: \"862ae668-98ff-4531-9eca-6309953e1333\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tmlwp" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.346281 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.376523 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-szdjh"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.377848 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-szdjh" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.381175 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-szdjh"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.388341 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rc7qp" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.403104 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2jdhx\" (UID: \"ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.403147 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ltl9\" (UniqueName: \"kubernetes.io/projected/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-kube-api-access-5ltl9\") pod \"infra-operator-controller-manager-78d48bff9d-2jdhx\" (UID: \"ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.403174 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8822b\" (UniqueName: \"kubernetes.io/projected/ac673936-2054-4418-bb79-5aad0e79b264-kube-api-access-8822b\") pod \"horizon-operator-controller-manager-68c6d99b8f-7vs7p\" (UID: \"ac673936-2054-4418-bb79-5aad0e79b264\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7vs7p" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.403201 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrtqx\" (UniqueName: \"kubernetes.io/projected/be805dee-d0fa-4358-b461-bfd98c87bcaa-kube-api-access-wrtqx\") pod \"ironic-operator-controller-manager-967d97867-nbjpv\" (UID: \"be805dee-d0fa-4358-b461-bfd98c87bcaa\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-nbjpv" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.409243 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5rmxn"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.410721 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-dwczj" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.410966 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5rmxn" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.419404 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-s9fgq" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.436214 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-tvzpk" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.438794 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5rmxn"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.446178 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-drdt4" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.452695 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-5s987"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.454595 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-5s987" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.462363 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-flk57" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.463252 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tmlwp" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.500844 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-5s987"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.505726 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2jdhx\" (UID: \"ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.505780 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ltl9\" (UniqueName: \"kubernetes.io/projected/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-kube-api-access-5ltl9\") pod \"infra-operator-controller-manager-78d48bff9d-2jdhx\" (UID: \"ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.505812 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8822b\" (UniqueName: \"kubernetes.io/projected/ac673936-2054-4418-bb79-5aad0e79b264-kube-api-access-8822b\") pod \"horizon-operator-controller-manager-68c6d99b8f-7vs7p\" (UID: \"ac673936-2054-4418-bb79-5aad0e79b264\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7vs7p" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.505834 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrtqx\" (UniqueName: \"kubernetes.io/projected/be805dee-d0fa-4358-b461-bfd98c87bcaa-kube-api-access-wrtqx\") pod \"ironic-operator-controller-manager-967d97867-nbjpv\" (UID: \"be805dee-d0fa-4358-b461-bfd98c87bcaa\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-nbjpv" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.505872 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp9df\" (UniqueName: 
\"kubernetes.io/projected/db7bd96a-8494-4041-9277-93705f23849d-kube-api-access-jp9df\") pod \"keystone-operator-controller-manager-7765d96ddf-szdjh\" (UID: \"db7bd96a-8494-4041-9277-93705f23849d\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-szdjh" Dec 10 14:52:07 crc kubenswrapper[4727]: E1210 14:52:07.506110 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 14:52:07 crc kubenswrapper[4727]: E1210 14:52:07.506195 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert podName:ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e nodeName:}" failed. No retries permitted until 2025-12-10 14:52:08.00615322 +0000 UTC m=+1232.200927762 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert") pod "infra-operator-controller-manager-78d48bff9d-2jdhx" (UID: "ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e") : secret "infra-operator-webhook-server-cert" not found Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.516605 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-q9gqt" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.521112 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.555889 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ltl9\" (UniqueName: \"kubernetes.io/projected/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-kube-api-access-5ltl9\") pod \"infra-operator-controller-manager-78d48bff9d-2jdhx\" (UID: \"ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.620482 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8822b\" (UniqueName: \"kubernetes.io/projected/ac673936-2054-4418-bb79-5aad0e79b264-kube-api-access-8822b\") pod \"horizon-operator-controller-manager-68c6d99b8f-7vs7p\" (UID: \"ac673936-2054-4418-bb79-5aad0e79b264\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7vs7p" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.645166 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.645839 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp9df\" (UniqueName: \"kubernetes.io/projected/db7bd96a-8494-4041-9277-93705f23849d-kube-api-access-jp9df\") pod \"keystone-operator-controller-manager-7765d96ddf-szdjh\" (UID: \"db7bd96a-8494-4041-9277-93705f23849d\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-szdjh" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.649839 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrtqx\" (UniqueName: \"kubernetes.io/projected/be805dee-d0fa-4358-b461-bfd98c87bcaa-kube-api-access-wrtqx\") pod \"ironic-operator-controller-manager-967d97867-nbjpv\" (UID: \"be805dee-d0fa-4358-b461-bfd98c87bcaa\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-nbjpv" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.650335 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-5bzbt" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.651210 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvxnx\" (UniqueName: \"kubernetes.io/projected/da47343a-d98d-4f70-bc94-ae74257914e2-kube-api-access-wvxnx\") pod \"manila-operator-controller-manager-5b5fd79c9c-5rmxn\" (UID: \"da47343a-d98d-4f70-bc94-ae74257914e2\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5rmxn" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.651358 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qwt4\" (UniqueName: \"kubernetes.io/projected/713c2d47-7281-46d4-bbcd-16fba5161b5a-kube-api-access-5qwt4\") pod \"mariadb-operator-controller-manager-79c8c4686c-5s987\" (UID: \"713c2d47-7281-46d4-bbcd-16fba5161b5a\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-5s987" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.724752 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.724840 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.752683 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp9df\" (UniqueName: \"kubernetes.io/projected/db7bd96a-8494-4041-9277-93705f23849d-kube-api-access-jp9df\") pod \"keystone-operator-controller-manager-7765d96ddf-szdjh\" (UID: \"db7bd96a-8494-4041-9277-93705f23849d\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-szdjh" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.759120 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvxnx\" 
(UniqueName: \"kubernetes.io/projected/da47343a-d98d-4f70-bc94-ae74257914e2-kube-api-access-wvxnx\") pod \"manila-operator-controller-manager-5b5fd79c9c-5rmxn\" (UID: \"da47343a-d98d-4f70-bc94-ae74257914e2\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5rmxn" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.759527 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qwt4\" (UniqueName: \"kubernetes.io/projected/713c2d47-7281-46d4-bbcd-16fba5161b5a-kube-api-access-5qwt4\") pod \"mariadb-operator-controller-manager-79c8c4686c-5s987\" (UID: \"713c2d47-7281-46d4-bbcd-16fba5161b5a\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-5s987" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.759587 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdsdt\" (UniqueName: \"kubernetes.io/projected/34a91f52-98f9-4ada-b0d1-54bba42c1035-kube-api-access-xdsdt\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-hqsbc\" (UID: \"34a91f52-98f9-4ada-b0d1-54bba42c1035\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.778219 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.839624 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.846827 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.851045 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-7jvdp" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.862385 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.863154 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdsdt\" (UniqueName: \"kubernetes.io/projected/34a91f52-98f9-4ada-b0d1-54bba42c1035-kube-api-access-xdsdt\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-hqsbc\" (UID: \"34a91f52-98f9-4ada-b0d1-54bba42c1035\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.887278 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7vs7p" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.898722 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdsdt\" (UniqueName: \"kubernetes.io/projected/34a91f52-98f9-4ada-b0d1-54bba42c1035-kube-api-access-xdsdt\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-hqsbc\" (UID: \"34a91f52-98f9-4ada-b0d1-54bba42c1035\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.900588 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvxnx\" (UniqueName: \"kubernetes.io/projected/da47343a-d98d-4f70-bc94-ae74257914e2-kube-api-access-wvxnx\") pod \"manila-operator-controller-manager-5b5fd79c9c-5rmxn\" (UID: \"da47343a-d98d-4f70-bc94-ae74257914e2\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5rmxn" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.903012 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-m2c89"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.904733 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-m2c89" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.908094 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ktlwp" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.922610 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qwt4\" (UniqueName: \"kubernetes.io/projected/713c2d47-7281-46d4-bbcd-16fba5161b5a-kube-api-access-5qwt4\") pod \"mariadb-operator-controller-manager-79c8c4686c-5s987\" (UID: \"713c2d47-7281-46d4-bbcd-16fba5161b5a\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-5s987" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.928379 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-m2c89"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.934971 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.936381 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.937306 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-nbjpv" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.945610 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.952148 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-s4rr2" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.955365 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.968404 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2lb4d" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.969208 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.969472 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4"] Dec 10 14:52:07 crc kubenswrapper[4727]: I1210 14:52:07.975678 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfb7x\" (UniqueName: \"kubernetes.io/projected/9cf31fa3-bbae-4fcd-9d8a-c11a6b291642-kube-api-access-bfb7x\") pod \"nova-operator-controller-manager-697bc559fc-9zpx4\" (UID: \"9cf31fa3-bbae-4fcd-9d8a-c11a6b291642\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.000015 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-b2njb"] Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.002023 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-b2njb" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.008750 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-b2njb"] Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.029666 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl"] Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.030268 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-fzvpw"] Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.032636 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-fzvpw" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.045643 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p"] Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.047690 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-szdjh" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.048367 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.049324 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zxr48" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.049548 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5rmxn" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.049842 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-kcjhl" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.052195 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.080568 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx"] Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.082351 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg"] Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.083400 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-fzvpw"] Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.087969 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.089305 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.091039 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p"] Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.097338 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srf9q\" (UniqueName: \"kubernetes.io/projected/adc7591e-00e9-4ed5-9d6b-729a283cf25d-kube-api-access-srf9q\") pod \"test-operator-controller-manager-5854674fcc-dbvwg\" (UID: \"adc7591e-00e9-4ed5-9d6b-729a283cf25d\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.097574 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjzms\" (UniqueName: \"kubernetes.io/projected/65cd268a-1c59-4115-8732-085b07c41edf-kube-api-access-cjzms\") pod \"swift-operator-controller-manager-9d58d64bc-fzvpw\" (UID: \"65cd268a-1c59-4115-8732-085b07c41edf\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-fzvpw" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.097740 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bxr\" (UniqueName: \"kubernetes.io/projected/4bb1a719-bf76-4753-9bda-e5b2a71b2f96-kube-api-access-v4bxr\") pod \"telemetry-operator-controller-manager-5bbb8fffcc-sml6p\" (UID: \"4bb1a719-bf76-4753-9bda-e5b2a71b2f96\") " pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.097832 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv89s\" (UniqueName: \"kubernetes.io/projected/a1fce3d2-8cd4-4c57-95a4-57f04ff403a7-kube-api-access-xv89s\") pod \"placement-operator-controller-manager-78f8948974-b2njb\" (UID: 
\"a1fce3d2-8cd4-4c57-95a4-57f04ff403a7\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-b2njb" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.097987 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2jdhx\" (UID: \"ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.098114 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqf7k\" (UniqueName: \"kubernetes.io/projected/492a4765-2161-48a4-a37b-2a11c919ebcf-kube-api-access-tqf7k\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh8tfl\" (UID: \"492a4765-2161-48a4-a37b-2a11c919ebcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.098234 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh8tfl\" (UID: \"492a4765-2161-48a4-a37b-2a11c919ebcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.098381 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt94m\" (UniqueName: \"kubernetes.io/projected/eae9110d-9b14-4360-9e66-60bf84efae12-kube-api-access-xt94m\") pod \"watcher-operator-controller-manager-75944c9b7-qllqx\" (UID: \"eae9110d-9b14-4360-9e66-60bf84efae12\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.098487 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfb7x\" (UniqueName: \"kubernetes.io/projected/9cf31fa3-bbae-4fcd-9d8a-c11a6b291642-kube-api-access-bfb7x\") pod \"nova-operator-controller-manager-697bc559fc-9zpx4\" (UID: \"9cf31fa3-bbae-4fcd-9d8a-c11a6b291642\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.098579 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96wwx\" (UniqueName: \"kubernetes.io/projected/365ce9e6-678d-4036-9971-3c82e553fa22-kube-api-access-96wwx\") pod \"octavia-operator-controller-manager-998648c74-m2c89\" (UID: \"365ce9e6-678d-4036-9971-3c82e553fa22\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-m2c89" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.098731 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szdv2\" (UniqueName: \"kubernetes.io/projected/f2964ecf-16a8-4906-a0ee-b2823ec9e9fd-kube-api-access-szdv2\") pod \"ovn-operator-controller-manager-b6456fdb6-jx2r4\" (UID: \"f2964ecf-16a8-4906-a0ee-b2823ec9e9fd\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4" Dec 10 14:52:08 crc kubenswrapper[4727]: E1210 14:52:08.099006 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: 
secret "infra-operator-webhook-server-cert" not found Dec 10 14:52:08 crc kubenswrapper[4727]: E1210 14:52:08.099134 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert podName:ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e nodeName:}" failed. No retries permitted until 2025-12-10 14:52:09.099112008 +0000 UTC m=+1233.293886550 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert") pod "infra-operator-controller-manager-78d48bff9d-2jdhx" (UID: "ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e") : secret "infra-operator-webhook-server-cert" not found Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.105753 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx"] Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.114016 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg"] Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.115586 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8svtp" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.130733 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-cbgfj" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.131415 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-5xj2g" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.153713 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-5s987" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.201123 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4bxr\" (UniqueName: \"kubernetes.io/projected/4bb1a719-bf76-4753-9bda-e5b2a71b2f96-kube-api-access-v4bxr\") pod \"telemetry-operator-controller-manager-5bbb8fffcc-sml6p\" (UID: \"4bb1a719-bf76-4753-9bda-e5b2a71b2f96\") " pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.201234 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv89s\" (UniqueName: \"kubernetes.io/projected/a1fce3d2-8cd4-4c57-95a4-57f04ff403a7-kube-api-access-xv89s\") pod \"placement-operator-controller-manager-78f8948974-b2njb\" (UID: \"a1fce3d2-8cd4-4c57-95a4-57f04ff403a7\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-b2njb" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.201343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqf7k\" (UniqueName: \"kubernetes.io/projected/492a4765-2161-48a4-a37b-2a11c919ebcf-kube-api-access-tqf7k\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh8tfl\" (UID: \"492a4765-2161-48a4-a37b-2a11c919ebcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.201413 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh8tfl\" (UID: \"492a4765-2161-48a4-a37b-2a11c919ebcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.201503 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt94m\" (UniqueName: \"kubernetes.io/projected/eae9110d-9b14-4360-9e66-60bf84efae12-kube-api-access-xt94m\") pod \"watcher-operator-controller-manager-75944c9b7-qllqx\" (UID: \"eae9110d-9b14-4360-9e66-60bf84efae12\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.201671 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96wwx\" (UniqueName: \"kubernetes.io/projected/365ce9e6-678d-4036-9971-3c82e553fa22-kube-api-access-96wwx\") pod \"octavia-operator-controller-manager-998648c74-m2c89\" (UID: \"365ce9e6-678d-4036-9971-3c82e553fa22\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-m2c89" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.201756 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szdv2\" (UniqueName: \"kubernetes.io/projected/f2964ecf-16a8-4906-a0ee-b2823ec9e9fd-kube-api-access-szdv2\") pod \"ovn-operator-controller-manager-b6456fdb6-jx2r4\" (UID: \"f2964ecf-16a8-4906-a0ee-b2823ec9e9fd\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.201952 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srf9q\" (UniqueName: 
\"kubernetes.io/projected/adc7591e-00e9-4ed5-9d6b-729a283cf25d-kube-api-access-srf9q\") pod \"test-operator-controller-manager-5854674fcc-dbvwg\" (UID: \"adc7591e-00e9-4ed5-9d6b-729a283cf25d\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.202141 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjzms\" (UniqueName: \"kubernetes.io/projected/65cd268a-1c59-4115-8732-085b07c41edf-kube-api-access-cjzms\") pod \"swift-operator-controller-manager-9d58d64bc-fzvpw\" (UID: \"65cd268a-1c59-4115-8732-085b07c41edf\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-fzvpw" Dec 10 14:52:08 crc kubenswrapper[4727]: E1210 14:52:08.214642 4727 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:08 crc kubenswrapper[4727]: E1210 14:52:08.214721 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert podName:492a4765-2161-48a4-a37b-2a11c919ebcf nodeName:}" failed. No retries permitted until 2025-12-10 14:52:08.714699687 +0000 UTC m=+1232.909474229 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fh8tfl" (UID: "492a4765-2161-48a4-a37b-2a11c919ebcf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.252360 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfb7x\" (UniqueName: \"kubernetes.io/projected/9cf31fa3-bbae-4fcd-9d8a-c11a6b291642-kube-api-access-bfb7x\") pod \"nova-operator-controller-manager-697bc559fc-9zpx4\" (UID: \"9cf31fa3-bbae-4fcd-9d8a-c11a6b291642\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.275469 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4bxr\" (UniqueName: \"kubernetes.io/projected/4bb1a719-bf76-4753-9bda-e5b2a71b2f96-kube-api-access-v4bxr\") pod \"telemetry-operator-controller-manager-5bbb8fffcc-sml6p\" (UID: \"4bb1a719-bf76-4753-9bda-e5b2a71b2f96\") " pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.282335 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srf9q\" (UniqueName: \"kubernetes.io/projected/adc7591e-00e9-4ed5-9d6b-729a283cf25d-kube-api-access-srf9q\") pod \"test-operator-controller-manager-5854674fcc-dbvwg\" (UID: \"adc7591e-00e9-4ed5-9d6b-729a283cf25d\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.289577 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szdv2\" (UniqueName: \"kubernetes.io/projected/f2964ecf-16a8-4906-a0ee-b2823ec9e9fd-kube-api-access-szdv2\") pod \"ovn-operator-controller-manager-b6456fdb6-jx2r4\" (UID: \"f2964ecf-16a8-4906-a0ee-b2823ec9e9fd\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.298282 4727 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.299623 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96wwx\" (UniqueName: \"kubernetes.io/projected/365ce9e6-678d-4036-9971-3c82e553fa22-kube-api-access-96wwx\") pod \"octavia-operator-controller-manager-998648c74-m2c89\" (UID: \"365ce9e6-678d-4036-9971-3c82e553fa22\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-m2c89" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.302386 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt94m\" (UniqueName: \"kubernetes.io/projected/eae9110d-9b14-4360-9e66-60bf84efae12-kube-api-access-xt94m\") pod \"watcher-operator-controller-manager-75944c9b7-qllqx\" (UID: \"eae9110d-9b14-4360-9e66-60bf84efae12\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.303070 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqf7k\" (UniqueName: \"kubernetes.io/projected/492a4765-2161-48a4-a37b-2a11c919ebcf-kube-api-access-tqf7k\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh8tfl\" (UID: \"492a4765-2161-48a4-a37b-2a11c919ebcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.312291 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv89s\" (UniqueName: \"kubernetes.io/projected/a1fce3d2-8cd4-4c57-95a4-57f04ff403a7-kube-api-access-xv89s\") pod \"placement-operator-controller-manager-78f8948974-b2njb\" (UID: \"a1fce3d2-8cd4-4c57-95a4-57f04ff403a7\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-b2njb" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.328824 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w"] Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.336097 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.350622 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.351133 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.351326 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-79kwf" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.420989 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjzms\" (UniqueName: \"kubernetes.io/projected/65cd268a-1c59-4115-8732-085b07c41edf-kube-api-access-cjzms\") pod \"swift-operator-controller-manager-9d58d64bc-fzvpw\" (UID: \"65cd268a-1c59-4115-8732-085b07c41edf\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-fzvpw" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.423847 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6hs5\" (UniqueName: \"kubernetes.io/projected/1149bbe0-dcd9-430b-b37a-fb145387df5f-kube-api-access-c6hs5\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.423954 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.424052 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.566340 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-fzvpw" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.566897 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.567538 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-b2njb" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.571453 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.571561 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.572864 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6hs5\" (UniqueName: \"kubernetes.io/projected/1149bbe0-dcd9-430b-b37a-fb145387df5f-kube-api-access-c6hs5\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.573780 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w"] Dec 10 14:52:08 crc kubenswrapper[4727]: E1210 14:52:08.573977 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 14:52:08 crc kubenswrapper[4727]: E1210 14:52:08.574045 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs podName:1149bbe0-dcd9-430b-b37a-fb145387df5f nodeName:}" failed. No retries permitted until 2025-12-10 14:52:09.074021343 +0000 UTC m=+1233.268795885 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs") pod "openstack-operator-controller-manager-5f9b77867b-r8l6w" (UID: "1149bbe0-dcd9-430b-b37a-fb145387df5f") : secret "webhook-server-cert" not found Dec 10 14:52:08 crc kubenswrapper[4727]: E1210 14:52:08.576356 4727 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 14:52:08 crc kubenswrapper[4727]: E1210 14:52:08.576444 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs podName:1149bbe0-dcd9-430b-b37a-fb145387df5f nodeName:}" failed. No retries permitted until 2025-12-10 14:52:09.076418624 +0000 UTC m=+1233.271193166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs") pod "openstack-operator-controller-manager-5f9b77867b-r8l6w" (UID: "1149bbe0-dcd9-430b-b37a-fb145387df5f") : secret "metrics-server-cert" not found Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.581030 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-m2c89" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.590853 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.612890 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.646306 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6hs5\" (UniqueName: \"kubernetes.io/projected/1149bbe0-dcd9-430b-b37a-fb145387df5f-kube-api-access-c6hs5\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.733946 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh8tfl\" (UID: \"492a4765-2161-48a4-a37b-2a11c919ebcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" Dec 10 14:52:08 crc kubenswrapper[4727]: E1210 14:52:08.735703 4727 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:08 crc kubenswrapper[4727]: E1210 14:52:08.735813 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert podName:492a4765-2161-48a4-a37b-2a11c919ebcf nodeName:}" failed. No retries permitted until 2025-12-10 14:52:09.735772729 +0000 UTC m=+1233.930547261 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fh8tfl" (UID: "492a4765-2161-48a4-a37b-2a11c919ebcf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.824285 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.833662 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2mccl"] Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.835183 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2mccl" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.839139 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-flsv8" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.950240 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6w6s\" (UniqueName: \"kubernetes.io/projected/26bb64b8-f74e-41f5-9bca-ed451468d0ab-kube-api-access-v6w6s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2mccl\" (UID: \"26bb64b8-f74e-41f5-9bca-ed451468d0ab\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2mccl" Dec 10 14:52:08 crc kubenswrapper[4727]: I1210 14:52:08.971241 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2mccl"] Dec 10 14:52:09 crc kubenswrapper[4727]: I1210 14:52:09.012370 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-dwczj"] Dec 10 14:52:09 crc kubenswrapper[4727]: I1210 14:52:09.058458 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6w6s\" (UniqueName: \"kubernetes.io/projected/26bb64b8-f74e-41f5-9bca-ed451468d0ab-kube-api-access-v6w6s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2mccl\" (UID: \"26bb64b8-f74e-41f5-9bca-ed451468d0ab\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2mccl" Dec 10 14:52:09 crc kubenswrapper[4727]: I1210 14:52:09.099516 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6w6s\" (UniqueName: \"kubernetes.io/projected/26bb64b8-f74e-41f5-9bca-ed451468d0ab-kube-api-access-v6w6s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2mccl\" (UID: \"26bb64b8-f74e-41f5-9bca-ed451468d0ab\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2mccl" Dec 10 14:52:09 crc kubenswrapper[4727]: I1210 14:52:09.159303 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:09 crc kubenswrapper[4727]: I1210 14:52:09.159721 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:09 crc kubenswrapper[4727]: I1210 14:52:09.159786 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2jdhx\" (UID: \"ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" Dec 10 14:52:09 crc kubenswrapper[4727]: E1210 14:52:09.159538 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: 
secret "webhook-server-cert" not found Dec 10 14:52:09 crc kubenswrapper[4727]: E1210 14:52:09.160055 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs podName:1149bbe0-dcd9-430b-b37a-fb145387df5f nodeName:}" failed. No retries permitted until 2025-12-10 14:52:10.160034775 +0000 UTC m=+1234.354809317 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs") pod "openstack-operator-controller-manager-5f9b77867b-r8l6w" (UID: "1149bbe0-dcd9-430b-b37a-fb145387df5f") : secret "webhook-server-cert" not found Dec 10 14:52:09 crc kubenswrapper[4727]: E1210 14:52:09.160543 4727 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 14:52:09 crc kubenswrapper[4727]: E1210 14:52:09.160577 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs podName:1149bbe0-dcd9-430b-b37a-fb145387df5f nodeName:}" failed. No retries permitted until 2025-12-10 14:52:10.160567419 +0000 UTC m=+1234.355341961 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs") pod "openstack-operator-controller-manager-5f9b77867b-r8l6w" (UID: "1149bbe0-dcd9-430b-b37a-fb145387df5f") : secret "metrics-server-cert" not found Dec 10 14:52:09 crc kubenswrapper[4727]: E1210 14:52:09.159985 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 14:52:09 crc kubenswrapper[4727]: E1210 14:52:09.160611 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert podName:ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e nodeName:}" failed. No retries permitted until 2025-12-10 14:52:11.16060254 +0000 UTC m=+1235.355377082 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert") pod "infra-operator-controller-manager-78d48bff9d-2jdhx" (UID: "ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e") : secret "infra-operator-webhook-server-cert" not found Dec 10 14:52:09 crc kubenswrapper[4727]: I1210 14:52:09.295990 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2mccl" Dec 10 14:52:09 crc kubenswrapper[4727]: I1210 14:52:09.485330 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-dwczj" event={"ID":"14d03018-c372-4ede-bb5d-47efd53f4d51","Type":"ContainerStarted","Data":"888ea0808f829dd4cb069969c54e683afb3a54c5d054c7fe39422e7928630077"} Dec 10 14:52:09 crc kubenswrapper[4727]: I1210 14:52:09.797195 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh8tfl\" (UID: \"492a4765-2161-48a4-a37b-2a11c919ebcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" Dec 10 14:52:09 crc kubenswrapper[4727]: E1210 14:52:09.797894 4727 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:09 crc kubenswrapper[4727]: E1210 14:52:09.797983 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert podName:492a4765-2161-48a4-a37b-2a11c919ebcf nodeName:}" failed. No retries permitted until 2025-12-10 14:52:11.797962758 +0000 UTC m=+1235.992737300 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fh8tfl" (UID: "492a4765-2161-48a4-a37b-2a11c919ebcf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:10 crc kubenswrapper[4727]: I1210 14:52:10.185448 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:10 crc kubenswrapper[4727]: I1210 14:52:10.185599 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:10 crc kubenswrapper[4727]: E1210 14:52:10.185968 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 14:52:10 crc kubenswrapper[4727]: E1210 14:52:10.186213 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs podName:1149bbe0-dcd9-430b-b37a-fb145387df5f nodeName:}" failed. No retries permitted until 2025-12-10 14:52:12.186100982 +0000 UTC m=+1236.380875514 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs") pod "openstack-operator-controller-manager-5f9b77867b-r8l6w" (UID: "1149bbe0-dcd9-430b-b37a-fb145387df5f") : secret "webhook-server-cert" not found Dec 10 14:52:10 crc kubenswrapper[4727]: E1210 14:52:10.186841 4727 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 14:52:10 crc kubenswrapper[4727]: E1210 14:52:10.186953 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs podName:1149bbe0-dcd9-430b-b37a-fb145387df5f nodeName:}" failed. No retries permitted until 2025-12-10 14:52:12.186897922 +0000 UTC m=+1236.381672464 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs") pod "openstack-operator-controller-manager-5f9b77867b-r8l6w" (UID: "1149bbe0-dcd9-430b-b37a-fb145387df5f") : secret "metrics-server-cert" not found Dec 10 14:52:11 crc kubenswrapper[4727]: I1210 14:52:11.619371 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2jdhx\" (UID: \"ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" Dec 10 14:52:11 crc kubenswrapper[4727]: E1210 14:52:11.630926 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 14:52:11 crc kubenswrapper[4727]: E1210 14:52:11.631205 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert podName:ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e nodeName:}" failed. No retries permitted until 2025-12-10 14:52:15.6310798 +0000 UTC m=+1239.825854342 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert") pod "infra-operator-controller-manager-78d48bff9d-2jdhx" (UID: "ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e") : secret "infra-operator-webhook-server-cert" not found Dec 10 14:52:12 crc kubenswrapper[4727]: I1210 14:52:12.329199 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:12 crc kubenswrapper[4727]: I1210 14:52:12.329295 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh8tfl\" (UID: \"492a4765-2161-48a4-a37b-2a11c919ebcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" Dec 10 14:52:12 crc kubenswrapper[4727]: I1210 14:52:12.329346 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:12 crc kubenswrapper[4727]: E1210 14:52:12.329474 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 14:52:12 crc kubenswrapper[4727]: E1210 14:52:12.329525 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs podName:1149bbe0-dcd9-430b-b37a-fb145387df5f nodeName:}" failed. No retries permitted until 2025-12-10 14:52:16.329511632 +0000 UTC m=+1240.524286174 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs") pod "openstack-operator-controller-manager-5f9b77867b-r8l6w" (UID: "1149bbe0-dcd9-430b-b37a-fb145387df5f") : secret "webhook-server-cert" not found Dec 10 14:52:12 crc kubenswrapper[4727]: E1210 14:52:12.329549 4727 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 14:52:12 crc kubenswrapper[4727]: E1210 14:52:12.329594 4727 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:12 crc kubenswrapper[4727]: E1210 14:52:12.329624 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs podName:1149bbe0-dcd9-430b-b37a-fb145387df5f nodeName:}" failed. No retries permitted until 2025-12-10 14:52:16.329603244 +0000 UTC m=+1240.524377856 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs") pod "openstack-operator-controller-manager-5f9b77867b-r8l6w" (UID: "1149bbe0-dcd9-430b-b37a-fb145387df5f") : secret "metrics-server-cert" not found Dec 10 14:52:12 crc kubenswrapper[4727]: E1210 14:52:12.329666 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert podName:492a4765-2161-48a4-a37b-2a11c919ebcf nodeName:}" failed. No retries permitted until 2025-12-10 14:52:16.329644965 +0000 UTC m=+1240.524419507 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fh8tfl" (UID: "492a4765-2161-48a4-a37b-2a11c919ebcf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:12 crc kubenswrapper[4727]: I1210 14:52:12.968180 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-nbjpv"] Dec 10 14:52:12 crc kubenswrapper[4727]: I1210 14:52:12.999702 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-drdt4"] Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.054572 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-5s987"] Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.096603 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-q9gqt"] Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.119195 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-fzvpw"] Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.132229 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-tmlwp"] Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.143011 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-szdjh"] Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.153633 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7vs7p"] Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.166360 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-tvzpk"] Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.181768 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5rmxn"] Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.296471 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg"] Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.303733 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2mccl"] Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.307668 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-srf9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-dbvwg_openstack-operators(adc7591e-00e9-4ed5-9d6b-729a283cf25d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.310867 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.310867 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-srf9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-dbvwg_openstack-operators(adc7591e-00e9-4ed5-9d6b-729a283cf25d): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.312406 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg" podUID="adc7591e-00e9-4ed5-9d6b-729a283cf25d"
Dec 10 14:52:13 crc kubenswrapper[4727]: W1210 14:52:13.331566 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bb1a719_bf76_4753_9bda_e5b2a71b2f96.slice/crio-8306f935a24c0544a351ba3e7b4a9b062d961c39f21fd0ae83c79a24837e9f8d WatchSource:0}: Error finding container 8306f935a24c0544a351ba3e7b4a9b062d961c39f21fd0ae83c79a24837e9f8d: Status 404 returned error can't find the container with id 8306f935a24c0544a351ba3e7b4a9b062d961c39f21fd0ae83c79a24837e9f8d
Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.334247 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.147:5001/openstack-k8s-operators/telemetry-operator:c4794e7165126ca78a1af546bb4ba50c90b5c4e1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4bxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5bbb8fffcc-sml6p_openstack-operators(4bb1a719-bf76-4753-9bda-e5b2a71b2f96): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.334326 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-m2c89"] Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.338476 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4bxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5bbb8fffcc-sml6p_openstack-operators(4bb1a719-bf76-4753-9bda-e5b2a71b2f96): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:13 crc kubenswrapper[4727]: W1210 14:52:13.339054 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod365ce9e6_678d_4036_9971_3c82e553fa22.slice/crio-45293a052e36f16fba86ea37d880001843c02150c678ae0f02a65e1460462545 WatchSource:0}: Error finding 
container 45293a052e36f16fba86ea37d880001843c02150c678ae0f02a65e1460462545: Status 404 returned error can't find the container with id 45293a052e36f16fba86ea37d880001843c02150c678ae0f02a65e1460462545 Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.340018 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" podUID="4bb1a719-bf76-4753-9bda-e5b2a71b2f96" Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.344632 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96wwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-m2c89_openstack-operators(365ce9e6-678d-4036-9971-3c82e553fa22): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.346984 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96wwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-m2c89_openstack-operators(365ce9e6-678d-4036-9971-3c82e553fa22): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.349401 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-m2c89" podUID="365ce9e6-678d-4036-9971-3c82e553fa22" Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.352358 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p"] Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.378515 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-b2njb"] Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.381135 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xv89s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-b2njb_openstack-operators(a1fce3d2-8cd4-4c57-95a4-57f04ff403a7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:13 crc kubenswrapper[4727]: W1210 14:52:13.388755 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cf31fa3_bbae_4fcd_9d8a_c11a6b291642.slice/crio-85e5dfbeeaf0644127e1e3bacc2f3155166d72596fc3491b42b2f4a077fbd1a8 WatchSource:0}: Error finding container 85e5dfbeeaf0644127e1e3bacc2f3155166d72596fc3491b42b2f4a077fbd1a8: Status 404 returned error can't find the container with id 85e5dfbeeaf0644127e1e3bacc2f3155166d72596fc3491b42b2f4a077fbd1a8 Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.392954 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4"] Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.411021 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc"] Dec 10 14:52:13 crc kubenswrapper[4727]: W1210 14:52:13.421366 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34a91f52_98f9_4ada_b0d1_54bba42c1035.slice/crio-f347d2e6072b2a67b29d383b6e1ef48a2e0e29d5d900e664ac46ff8bbbacde8a WatchSource:0}: Error finding container f347d2e6072b2a67b29d383b6e1ef48a2e0e29d5d900e664ac46ff8bbbacde8a: Status 404 returned error can't find the container with id f347d2e6072b2a67b29d383b6e1ef48a2e0e29d5d900e664ac46ff8bbbacde8a Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.424244 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx"] Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.425379 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdsdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-hqsbc_openstack-operators(34a91f52-98f9-4ada-b0d1-54bba42c1035): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.425481 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bfb7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-9zpx4_openstack-operators(9cf31fa3-bbae-4fcd-9d8a-c11a6b291642): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.432545 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdsdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-hqsbc_openstack-operators(34a91f52-98f9-4ada-b0d1-54bba42c1035): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.432699 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bfb7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-9zpx4_openstack-operators(9cf31fa3-bbae-4fcd-9d8a-c11a6b291642): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.434015 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" podUID="9cf31fa3-bbae-4fcd-9d8a-c11a6b291642" Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.434111 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc" podUID="34a91f52-98f9-4ada-b0d1-54bba42c1035" Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.435616 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xt94m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-qllqx_openstack-operators(eae9110d-9b14-4360-9e66-60bf84efae12): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.436155 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-szdv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-jx2r4_openstack-operators(f2964ecf-16a8-4906-a0ee-b2823ec9e9fd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.436753 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4"] Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.437490 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xt94m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-qllqx_openstack-operators(eae9110d-9b14-4360-9e66-60bf84efae12): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.494562 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" podUID="eae9110d-9b14-4360-9e66-60bf84efae12" Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.494648 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-szdv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-jx2r4_openstack-operators(f2964ecf-16a8-4906-a0ee-b2823ec9e9fd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.495969 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4" podUID="f2964ecf-16a8-4906-a0ee-b2823ec9e9fd" Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.687512 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" event={"ID":"4bb1a719-bf76-4753-9bda-e5b2a71b2f96","Type":"ContainerStarted","Data":"8306f935a24c0544a351ba3e7b4a9b062d961c39f21fd0ae83c79a24837e9f8d"} Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.692988 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-q9gqt" event={"ID":"4728dafd-bcaa-4b10-b2f5-9884b3d3a2b5","Type":"ContainerStarted","Data":"8448a20cf2e8378b768bb93b46c8964f773455e323806c9de8584b71be44d8a9"} Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.693169 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/openstack-k8s-operators/telemetry-operator:c4794e7165126ca78a1af546bb4ba50c90b5c4e1\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" podUID="4bb1a719-bf76-4753-9bda-e5b2a71b2f96" Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.694345 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5rmxn" 
event={"ID":"da47343a-d98d-4f70-bc94-ae74257914e2","Type":"ContainerStarted","Data":"dd38e937da0b9469c3901f5e23ff5136471f835c7f222561684a80f2f100ebbc"} Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.696567 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-5s987" event={"ID":"713c2d47-7281-46d4-bbcd-16fba5161b5a","Type":"ContainerStarted","Data":"f7b0f68df4b8475c8d7b7a003dbe4d5328266574b2c25cd80a072250e54c8de1"} Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.704145 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-m2c89" event={"ID":"365ce9e6-678d-4036-9971-3c82e553fa22","Type":"ContainerStarted","Data":"45293a052e36f16fba86ea37d880001843c02150c678ae0f02a65e1460462545"} Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.715169 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-tvzpk" event={"ID":"42310864-a5cb-44fd-b5eb-c5bf1f9c8ce9","Type":"ContainerStarted","Data":"c004d8678496ecac66a238c236453e1e77fac541dda94a8ebedf028f452f2661"} Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.715218 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-m2c89" podUID="365ce9e6-678d-4036-9971-3c82e553fa22" Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.724738 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-b2njb" event={"ID":"a1fce3d2-8cd4-4c57-95a4-57f04ff403a7","Type":"ContainerStarted","Data":"667bae6b054de3a40b227ba1a336d9ea9f9f3ab93660d133fd895c1a70739f12"} Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.733445 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2mccl" event={"ID":"26bb64b8-f74e-41f5-9bca-ed451468d0ab","Type":"ContainerStarted","Data":"c43a1576c3d758fb251bbab06e63d96bd816246914d7a319fe40438d26d07ef2"} Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.742583 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc" event={"ID":"34a91f52-98f9-4ada-b0d1-54bba42c1035","Type":"ContainerStarted","Data":"f347d2e6072b2a67b29d383b6e1ef48a2e0e29d5d900e664ac46ff8bbbacde8a"} Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.750144 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-szdjh" event={"ID":"db7bd96a-8494-4041-9277-93705f23849d","Type":"ContainerStarted","Data":"9d059476557d861d83b9e65c3681a96c613a5ae08543e8bd34954d1fadc8f2dc"} Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.755434 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc" podUID="34a91f52-98f9-4ada-b0d1-54bba42c1035" Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.760437 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" event={"ID":"9cf31fa3-bbae-4fcd-9d8a-c11a6b291642","Type":"ContainerStarted","Data":"85e5dfbeeaf0644127e1e3bacc2f3155166d72596fc3491b42b2f4a077fbd1a8"} Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.780231 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" podUID="9cf31fa3-bbae-4fcd-9d8a-c11a6b291642" Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.780530 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4" event={"ID":"f2964ecf-16a8-4906-a0ee-b2823ec9e9fd","Type":"ContainerStarted","Data":"1b6852d9a651068829d8de9d878a7445eafd0912b464d2dbe49bf7e3a917fa7a"} Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.789180 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4" podUID="f2964ecf-16a8-4906-a0ee-b2823ec9e9fd" Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.809482 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7vs7p" event={"ID":"ac673936-2054-4418-bb79-5aad0e79b264","Type":"ContainerStarted","Data":"a6bb7e41a4e057b1e5760fe4f00e220eab54e4e2aaa6bda62d34ccbcd96bc4c6"} Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.820168 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg" event={"ID":"adc7591e-00e9-4ed5-9d6b-729a283cf25d","Type":"ContainerStarted","Data":"c0bf4aa1ac8706b915b364e6e7fb5ce6db64a74cbe6732af20b32cb03c3ea0a9"} Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.841702 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg" podUID="adc7591e-00e9-4ed5-9d6b-729a283cf25d" Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.854588 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-fzvpw" event={"ID":"65cd268a-1c59-4115-8732-085b07c41edf","Type":"ContainerStarted","Data":"0fdd39dceb81d92e58667a51d5a599b105c449a449b2cd6495a146f781900f38"} Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.861160 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" event={"ID":"eae9110d-9b14-4360-9e66-60bf84efae12","Type":"ContainerStarted","Data":"b46d6385eefba0cc9b410e901664af3ff6771cf1e81e9ca724c2e8b4a2530270"} Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.894285 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-drdt4" event={"ID":"472840c5-9b95-4303-911c-7b27232236ea","Type":"ContainerStarted","Data":"e9faab0a9fb54e73bce056898c2b7aeac9bf179a7520c542f7c0e10ce29bafd4"} Dec 10 14:52:13 crc kubenswrapper[4727]: E1210 14:52:13.894331 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" podUID="eae9110d-9b14-4360-9e66-60bf84efae12" Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.911801 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-nbjpv" event={"ID":"be805dee-d0fa-4358-b461-bfd98c87bcaa","Type":"ContainerStarted","Data":"34eedd5fb0d3e8ca1dda7b593af6970b9fcb3576e06d689799fd004b7b8645b0"} Dec 10 14:52:13 crc kubenswrapper[4727]: I1210 14:52:13.920270 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tmlwp" event={"ID":"862ae668-98ff-4531-9eca-6309953e1333","Type":"ContainerStarted","Data":"7164e53fa47c69eb826d72e10f453cef3dfbd6fd7df6dcd28bfa540626db652a"} Dec 10 14:52:14 crc kubenswrapper[4727]: E1210 14:52:14.936279 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" podUID="9cf31fa3-bbae-4fcd-9d8a-c11a6b291642" Dec 10 14:52:14 crc kubenswrapper[4727]: E1210 14:52:14.936436 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-m2c89" podUID="365ce9e6-678d-4036-9971-3c82e553fa22" Dec 10 14:52:14 crc kubenswrapper[4727]: E1210 14:52:14.936533 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc" podUID="34a91f52-98f9-4ada-b0d1-54bba42c1035" Dec 10 14:52:14 crc kubenswrapper[4727]: E1210 14:52:14.936564 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/openstack-k8s-operators/telemetry-operator:c4794e7165126ca78a1af546bb4ba50c90b5c4e1\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" podUID="4bb1a719-bf76-4753-9bda-e5b2a71b2f96" Dec 10 14:52:14 crc kubenswrapper[4727]: E1210 14:52:14.936561 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4" podUID="f2964ecf-16a8-4906-a0ee-b2823ec9e9fd" Dec 10 14:52:14 crc kubenswrapper[4727]: E1210 14:52:14.936854 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg" podUID="adc7591e-00e9-4ed5-9d6b-729a283cf25d" Dec 10 14:52:14 crc kubenswrapper[4727]: E1210 14:52:14.941021 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" podUID="eae9110d-9b14-4360-9e66-60bf84efae12" Dec 10 14:52:15 crc kubenswrapper[4727]: I1210 14:52:15.684266 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2jdhx\" (UID: \"ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" Dec 10 14:52:15 crc kubenswrapper[4727]: E1210 14:52:15.684580 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 14:52:15 crc kubenswrapper[4727]: E1210 14:52:15.684638 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert podName:ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e nodeName:}" failed. No retries permitted until 2025-12-10 14:52:23.684618886 +0000 UTC m=+1247.879393438 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert") pod "infra-operator-controller-manager-78d48bff9d-2jdhx" (UID: "ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e") : secret "infra-operator-webhook-server-cert" not found Dec 10 14:52:16 crc kubenswrapper[4727]: I1210 14:52:16.335542 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:16 crc kubenswrapper[4727]: I1210 14:52:16.335664 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh8tfl\" (UID: \"492a4765-2161-48a4-a37b-2a11c919ebcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" Dec 10 14:52:16 crc kubenswrapper[4727]: I1210 14:52:16.335722 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:16 crc kubenswrapper[4727]: E1210 14:52:16.335922 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 14:52:16 crc kubenswrapper[4727]: E1210 14:52:16.335939 4727 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:16 crc kubenswrapper[4727]: E1210 14:52:16.335953 4727 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 14:52:16 crc kubenswrapper[4727]: E1210 14:52:16.335999 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs 
podName:1149bbe0-dcd9-430b-b37a-fb145387df5f nodeName:}" failed. No retries permitted until 2025-12-10 14:52:24.335976719 +0000 UTC m=+1248.530751271 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs") pod "openstack-operator-controller-manager-5f9b77867b-r8l6w" (UID: "1149bbe0-dcd9-430b-b37a-fb145387df5f") : secret "webhook-server-cert" not found Dec 10 14:52:16 crc kubenswrapper[4727]: E1210 14:52:16.336112 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert podName:492a4765-2161-48a4-a37b-2a11c919ebcf nodeName:}" failed. No retries permitted until 2025-12-10 14:52:24.336090172 +0000 UTC m=+1248.530864714 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fh8tfl" (UID: "492a4765-2161-48a4-a37b-2a11c919ebcf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:16 crc kubenswrapper[4727]: E1210 14:52:16.336147 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs podName:1149bbe0-dcd9-430b-b37a-fb145387df5f nodeName:}" failed. No retries permitted until 2025-12-10 14:52:24.336132883 +0000 UTC m=+1248.530907435 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs") pod "openstack-operator-controller-manager-5f9b77867b-r8l6w" (UID: "1149bbe0-dcd9-430b-b37a-fb145387df5f") : secret "metrics-server-cert" not found Dec 10 14:52:23 crc kubenswrapper[4727]: I1210 14:52:23.719408 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2jdhx\" (UID: \"ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" Dec 10 14:52:23 crc kubenswrapper[4727]: I1210 14:52:23.754803 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2jdhx\" (UID: \"ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" Dec 10 14:52:23 crc kubenswrapper[4727]: I1210 14:52:23.802313 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" Dec 10 14:52:24 crc kubenswrapper[4727]: I1210 14:52:24.435032 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh8tfl\" (UID: \"492a4765-2161-48a4-a37b-2a11c919ebcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" Dec 10 14:52:24 crc kubenswrapper[4727]: I1210 14:52:24.435428 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:24 crc kubenswrapper[4727]: I1210 14:52:24.435495 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:24 crc kubenswrapper[4727]: E1210 14:52:24.435546 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 14:52:24 crc kubenswrapper[4727]: E1210 14:52:24.435626 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs podName:1149bbe0-dcd9-430b-b37a-fb145387df5f nodeName:}" failed. No retries permitted until 2025-12-10 14:52:40.435607843 +0000 UTC m=+1264.630382385 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs") pod "openstack-operator-controller-manager-5f9b77867b-r8l6w" (UID: "1149bbe0-dcd9-430b-b37a-fb145387df5f") : secret "webhook-server-cert" not found Dec 10 14:52:24 crc kubenswrapper[4727]: I1210 14:52:24.439147 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/492a4765-2161-48a4-a37b-2a11c919ebcf-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh8tfl\" (UID: \"492a4765-2161-48a4-a37b-2a11c919ebcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" Dec 10 14:52:24 crc kubenswrapper[4727]: I1210 14:52:24.439714 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-metrics-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:24 crc kubenswrapper[4727]: I1210 14:52:24.625624 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" Dec 10 14:52:26 crc kubenswrapper[4727]: E1210 14:52:26.890233 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3" Dec 10 14:52:26 crc kubenswrapper[4727]: E1210 14:52:26.890754 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7twcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6c677c69b-tvzpk_openstack-operators(42310864-a5cb-44fd-b5eb-c5bf1f9c8ce9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:28 crc kubenswrapper[4727]: E1210 14:52:28.502699 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 10 14:52:28 crc kubenswrapper[4727]: E1210 14:52:28.503081 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8822b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-7vs7p_openstack-operators(ac673936-2054-4418-bb79-5aad0e79b264): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:33 crc kubenswrapper[4727]: E1210 14:52:33.475844 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 10 14:52:33 crc kubenswrapper[4727]: E1210 14:52:33.476668 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} 
{} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hsmz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-q9gqt_openstack-operators(4728dafd-bcaa-4b10-b2f5-9884b3d3a2b5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:35 crc kubenswrapper[4727]: E1210 14:52:35.627382 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a" Dec 10 14:52:35 crc kubenswrapper[4727]: E1210 14:52:35.628410 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-th7gx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-697fb699cf-drdt4_openstack-operators(472840c5-9b95-4303-911c-7b27232236ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:37 crc kubenswrapper[4727]: E1210 14:52:37.059630 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991" Dec 10 14:52:37 crc kubenswrapper[4727]: E1210 14:52:37.060248 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cjzms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-fzvpw_openstack-operators(65cd268a-1c59-4115-8732-085b07c41edf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:37 crc kubenswrapper[4727]: I1210 14:52:37.736251 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:52:37 crc kubenswrapper[4727]: I1210 14:52:37.736331 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:52:40 crc kubenswrapper[4727]: I1210 14:52:40.486695 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:40 crc kubenswrapper[4727]: I1210 14:52:40.511898 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1149bbe0-dcd9-430b-b37a-fb145387df5f-webhook-certs\") pod \"openstack-operator-controller-manager-5f9b77867b-r8l6w\" (UID: \"1149bbe0-dcd9-430b-b37a-fb145387df5f\") " pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:40 crc kubenswrapper[4727]: E1210 14:52:40.595984 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a" Dec 10 14:52:40 crc kubenswrapper[4727]: E1210 14:52:40.596439 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wvxnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-5rmxn_openstack-operators(da47343a-d98d-4f70-bc94-ae74257914e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:40 crc kubenswrapper[4727]: I1210 14:52:40.763592 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-79kwf" Dec 10 14:52:40 crc kubenswrapper[4727]: I1210 14:52:40.774850 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:52:41 crc kubenswrapper[4727]: E1210 14:52:41.196341 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 10 14:52:41 crc kubenswrapper[4727]: E1210 14:52:41.197359 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v6w6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-2mccl_openstack-operators(26bb64b8-f74e-41f5-9bca-ed451468d0ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:41 crc kubenswrapper[4727]: E1210 14:52:41.198543 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2mccl" podUID="26bb64b8-f74e-41f5-9bca-ed451468d0ab" Dec 10 14:52:42 crc kubenswrapper[4727]: E1210 14:52:42.194384 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2mccl" podUID="26bb64b8-f74e-41f5-9bca-ed451468d0ab" Dec 10 14:52:42 crc 
kubenswrapper[4727]: E1210 14:52:42.532755 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 10 14:52:42 crc kubenswrapper[4727]: E1210 14:52:42.533086 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jp9df,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-szdjh_openstack-operators(db7bd96a-8494-4041-9277-93705f23849d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:47 crc kubenswrapper[4727]: E1210 14:52:47.599761 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 10 14:52:47 crc kubenswrapper[4727]: E1210 14:52:47.601048 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-srf9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-dbvwg_openstack-operators(adc7591e-00e9-4ed5-9d6b-729a283cf25d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:53:04 crc kubenswrapper[4727]: E1210 14:53:04.549603 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 10 14:53:04 crc kubenswrapper[4727]: E1210 14:53:04.550640 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bfb7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-9zpx4_openstack-operators(9cf31fa3-bbae-4fcd-9d8a-c11a6b291642): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.286636 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/openstack-k8s-operators/telemetry-operator:c4794e7165126ca78a1af546bb4ba50c90b5c4e1" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.286998 4727 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/openstack-k8s-operators/telemetry-operator:c4794e7165126ca78a1af546bb4ba50c90b5c4e1" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.287171 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.147:5001/openstack-k8s-operators/telemetry-operator:c4794e7165126ca78a1af546bb4ba50c90b5c4e1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4bxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5bbb8fffcc-sml6p_openstack-operators(4bb1a719-bf76-4753-9bda-e5b2a71b2f96): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.332000 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.332210 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-th7gx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-697fb699cf-drdt4_openstack-operators(472840c5-9b95-4303-911c-7b27232236ea): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.332251 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" 
image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.332544 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cjzms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-fzvpw_openstack-operators(65cd268a-1c59-4115-8732-085b07c41edf): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.333378 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-drdt4" podUID="472840c5-9b95-4303-911c-7b27232236ea" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.333544 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.333661 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-fzvpw" podUID="65cd268a-1c59-4115-8732-085b07c41edf" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.333767 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8822b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-7vs7p_openstack-operators(ac673936-2054-4418-bb79-5aad0e79b264): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.335362 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7vs7p" podUID="ac673936-2054-4418-bb79-5aad0e79b264" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.373452 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.373644 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7twcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6c677c69b-tvzpk_openstack-operators(42310864-a5cb-44fd-b5eb-c5bf1f9c8ce9): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.374775 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-tvzpk" podUID="42310864-a5cb-44fd-b5eb-c5bf1f9c8ce9" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.389573 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.389720 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hsmz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-q9gqt_openstack-operators(4728dafd-bcaa-4b10-b2f5-9884b3d3a2b5): ErrImagePull: 
rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 10 14:53:05 crc kubenswrapper[4727]: E1210 14:53:05.390981 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-q9gqt" podUID="4728dafd-bcaa-4b10-b2f5-9884b3d3a2b5" Dec 10 14:53:05 crc kubenswrapper[4727]: I1210 14:53:05.881616 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx"] Dec 10 14:53:06 crc kubenswrapper[4727]: I1210 14:53:06.206182 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w"] Dec 10 14:53:06 crc kubenswrapper[4727]: I1210 14:53:06.271376 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl"] Dec 10 14:53:06 crc kubenswrapper[4727]: E1210 14:53:06.404213 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 14:53:06 crc kubenswrapper[4727]: E1210 14:53:06.404395 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xv89s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-b2njb_openstack-operators(a1fce3d2-8cd4-4c57-95a4-57f04ff403a7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:53:06 crc kubenswrapper[4727]: E1210 14:53:06.405494 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying config: context canceled\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-b2njb" podUID="a1fce3d2-8cd4-4c57-95a4-57f04ff403a7" Dec 10 14:53:06 crc kubenswrapper[4727]: W1210 14:53:06.406716 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad8d8c3a_b26b_47d4_a4d1_1fd318a5bf4e.slice/crio-f1f9396f5c3cd8659c64fd145695bca5229b0e601bf82937f2f3cc13e1ab670b WatchSource:0}: Error finding container f1f9396f5c3cd8659c64fd145695bca5229b0e601bf82937f2f3cc13e1ab670b: Status 404 returned error can't find the container with id f1f9396f5c3cd8659c64fd145695bca5229b0e601bf82937f2f3cc13e1ab670b Dec 10 14:53:06 crc kubenswrapper[4727]: W1210 14:53:06.409821 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1149bbe0_dcd9_430b_b37a_fb145387df5f.slice/crio-d2a141dd15eab1588ecbb4d1c7143d3af03c42ca88ec94de84cb8c6e17c79e2e WatchSource:0}: Error finding container d2a141dd15eab1588ecbb4d1c7143d3af03c42ca88ec94de84cb8c6e17c79e2e: Status 404 returned error can't find the container with id d2a141dd15eab1588ecbb4d1c7143d3af03c42ca88ec94de84cb8c6e17c79e2e Dec 10 14:53:06 crc kubenswrapper[4727]: I1210 14:53:06.443280 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" event={"ID":"1149bbe0-dcd9-430b-b37a-fb145387df5f","Type":"ContainerStarted","Data":"d2a141dd15eab1588ecbb4d1c7143d3af03c42ca88ec94de84cb8c6e17c79e2e"} Dec 10 14:53:06 crc kubenswrapper[4727]: I1210 14:53:06.444668 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" event={"ID":"492a4765-2161-48a4-a37b-2a11c919ebcf","Type":"ContainerStarted","Data":"b290110ebca26b70d85354c1ea4ff7aba35ac1e1303a126e3dacfc2f2636ef34"} Dec 10 14:53:06 crc kubenswrapper[4727]: I1210 14:53:06.446859 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" event={"ID":"ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e","Type":"ContainerStarted","Data":"f1f9396f5c3cd8659c64fd145695bca5229b0e601bf82937f2f3cc13e1ab670b"} Dec 10 14:53:07 crc kubenswrapper[4727]: I1210 14:53:07.525852 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-dwczj" event={"ID":"14d03018-c372-4ede-bb5d-47efd53f4d51","Type":"ContainerStarted","Data":"fa5e9aa22677d110502808ff71088033b518e5021802f571794bd2fdfa677bd5"} Dec 10 14:53:07 crc kubenswrapper[4727]: I1210 14:53:07.527075 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tmlwp" event={"ID":"862ae668-98ff-4531-9eca-6309953e1333","Type":"ContainerStarted","Data":"7686c32670997e947799a73ed9dd3c62738be42c69d862d0a2ef4529fb1faa97"} Dec 10 14:53:07 crc kubenswrapper[4727]: I1210 14:53:07.527975 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-5s987" event={"ID":"713c2d47-7281-46d4-bbcd-16fba5161b5a","Type":"ContainerStarted","Data":"5bd656bc2759b68621530a56807660de3a96f1eb56cac36d29c0bcb1b9b6492b"} Dec 10 14:53:07 crc kubenswrapper[4727]: I1210 14:53:07.530566 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-967d97867-nbjpv" event={"ID":"be805dee-d0fa-4358-b461-bfd98c87bcaa","Type":"ContainerStarted","Data":"4d7626d28752bc922820e794070600a147e8ccf3506a11137524ac02483bec2e"} Dec 10 14:53:07 crc kubenswrapper[4727]: I1210 14:53:07.934427 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:53:07 crc kubenswrapper[4727]: I1210 14:53:07.934494 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:53:07 crc kubenswrapper[4727]: I1210 14:53:07.934540 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:53:07 crc kubenswrapper[4727]: I1210 14:53:07.935183 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf0c0cb5db6cbb369cba9f7cbcfb4667db68ee6a05b492c3d6b69303943d84f1"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 14:53:07 crc kubenswrapper[4727]: I1210 14:53:07.935243 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://bf0c0cb5db6cbb369cba9f7cbcfb4667db68ee6a05b492c3d6b69303943d84f1" gracePeriod=600 Dec 10 14:53:08 crc kubenswrapper[4727]: I1210 14:53:08.551803 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="bf0c0cb5db6cbb369cba9f7cbcfb4667db68ee6a05b492c3d6b69303943d84f1" exitCode=0 Dec 10 14:53:08 crc kubenswrapper[4727]: I1210 14:53:08.551884 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"bf0c0cb5db6cbb369cba9f7cbcfb4667db68ee6a05b492c3d6b69303943d84f1"} Dec 10 14:53:08 crc kubenswrapper[4727]: I1210 14:53:08.552178 4727 scope.go:117] "RemoveContainer" containerID="d4da8f537ad153791693a45193621652b13001e5dd72906f744d548921ad04f8" Dec 10 14:53:10 crc kubenswrapper[4727]: I1210 14:53:10.593302 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"9b2d38dbef40e687b846e527a023d87ca5607929b3a7329d3334ac26ab387fb1"} Dec 10 14:53:10 crc kubenswrapper[4727]: I1210 14:53:10.604122 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2mccl" event={"ID":"26bb64b8-f74e-41f5-9bca-ed451468d0ab","Type":"ContainerStarted","Data":"2f56395d07f63aeb8bf21d39322eed5e460c5950d598d7c0d0784ab8c49b5fbe"} Dec 10 14:53:10 crc kubenswrapper[4727]: I1210 14:53:10.608114 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc" event={"ID":"34a91f52-98f9-4ada-b0d1-54bba42c1035","Type":"ContainerStarted","Data":"8a59c466c0a8f9d0e67d576889dc6f4411764321ce36d425ae1ff10c5019501a"} Dec 10 14:53:10 crc kubenswrapper[4727]: I1210 14:53:10.620161 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-m2c89" event={"ID":"365ce9e6-678d-4036-9971-3c82e553fa22","Type":"ContainerStarted","Data":"fcd3d7ccc9cbffaad8affd258d0951b945888b20a63c208a0ffa3b7659cec616"} Dec 10 14:53:10 crc kubenswrapper[4727]: I1210 14:53:10.624527 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" event={"ID":"eae9110d-9b14-4360-9e66-60bf84efae12","Type":"ContainerStarted","Data":"10ec1d7ff687c8412ff609cdc7a5eb6c90cbacb8fe975429bc7ac787a6e896d4"} Dec 10 14:53:10 crc kubenswrapper[4727]: I1210 14:53:10.633325 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4" event={"ID":"f2964ecf-16a8-4906-a0ee-b2823ec9e9fd","Type":"ContainerStarted","Data":"163d0918343fe0bf247c663f8ce8e2561ca0f7cc9bd83541395bca9035459771"} Dec 10 14:53:10 crc kubenswrapper[4727]: I1210 14:53:10.647726 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" event={"ID":"1149bbe0-dcd9-430b-b37a-fb145387df5f","Type":"ContainerStarted","Data":"2f2b3223a90d2848c866b5b167b29efd907dcb2e3764480911a0918a41faca98"} Dec 10 14:53:10 crc kubenswrapper[4727]: I1210 14:53:10.648445 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:53:10 crc kubenswrapper[4727]: I1210 14:53:10.650433 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2mccl" podStartSLOduration=9.527504644 podStartE2EDuration="1m2.65040363s" podCreationTimestamp="2025-12-10 14:52:08 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.303271917 +0000 UTC m=+1237.498046459" lastFinishedPulling="2025-12-10 14:53:06.426170903 +0000 UTC m=+1290.620945445" observedRunningTime="2025-12-10 14:53:10.644709116 +0000 UTC m=+1294.839483658" watchObservedRunningTime="2025-12-10 14:53:10.65040363 +0000 UTC m=+1294.845178172" Dec 10 14:53:10 crc kubenswrapper[4727]: I1210 14:53:10.652035 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7vs7p" event={"ID":"ac673936-2054-4418-bb79-5aad0e79b264","Type":"ContainerStarted","Data":"ac26f055c6ce067d8419163aea44491d30b4aec6c71e3fdbf6d9ad209933c65f"} Dec 10 14:53:11 crc kubenswrapper[4727]: I1210 14:53:11.068852 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" podStartSLOduration=63.068829038 podStartE2EDuration="1m3.068829038s" podCreationTimestamp="2025-12-10 14:52:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:53:11.067266619 +0000 UTC m=+1295.262041161" watchObservedRunningTime="2025-12-10 14:53:11.068829038 +0000 UTC m=+1295.263603580" Dec 10 14:53:16 crc kubenswrapper[4727]: I1210 14:53:16.700487 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-fzvpw" event={"ID":"65cd268a-1c59-4115-8732-085b07c41edf","Type":"ContainerStarted","Data":"38e608751c1ea8120524cd7790135b1e43efbef1f750c73aec98250876d75f02"} Dec 10 14:53:17 crc kubenswrapper[4727]: I1210 14:53:17.708357 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-q9gqt" event={"ID":"4728dafd-bcaa-4b10-b2f5-9884b3d3a2b5","Type":"ContainerStarted","Data":"c746292f06f37e6bf85e3ecd1979a98ea887dc43c89c84ee4c1fb126b0d337cf"} Dec 10 14:53:17 crc kubenswrapper[4727]: I1210 14:53:17.710430 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-drdt4" event={"ID":"472840c5-9b95-4303-911c-7b27232236ea","Type":"ContainerStarted","Data":"8ec534fb57c99f0ad90bace5721b23c7c05fc9c768b92bfa2f580e5038cd490c"} Dec 10 14:53:18 crc kubenswrapper[4727]: E1210 14:53:18.217266 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-szdjh" podUID="db7bd96a-8494-4041-9277-93705f23849d" Dec 10 14:53:18 crc kubenswrapper[4727]: E1210 14:53:18.697857 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" podUID="9cf31fa3-bbae-4fcd-9d8a-c11a6b291642" Dec 10 14:53:18 crc kubenswrapper[4727]: E1210 14:53:18.700482 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg" podUID="adc7591e-00e9-4ed5-9d6b-729a283cf25d" Dec 10 14:53:18 crc kubenswrapper[4727]: I1210 14:53:18.718949 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-szdjh" event={"ID":"db7bd96a-8494-4041-9277-93705f23849d","Type":"ContainerStarted","Data":"2c8254a6e23549d4d87447cb13d0804570a98f006da3949ebae50179bf1e7625"} Dec 10 14:53:18 crc kubenswrapper[4727]: I1210 14:53:18.721458 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" event={"ID":"9cf31fa3-bbae-4fcd-9d8a-c11a6b291642","Type":"ContainerStarted","Data":"15cae18f90436bc40bbbab58c50d5c90a28e2fee6738728e0e350f7fc2fa66b5"} Dec 10 14:53:18 crc kubenswrapper[4727]: E1210 14:53:18.722449 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" podUID="9cf31fa3-bbae-4fcd-9d8a-c11a6b291642" Dec 10 14:53:18 crc kubenswrapper[4727]: I1210 14:53:18.803855 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-tvzpk" 
event={"ID":"42310864-a5cb-44fd-b5eb-c5bf1f9c8ce9","Type":"ContainerStarted","Data":"a90f45ec5debc7842af5a29f52c974d52900312d562dec2649cb6ca4d198c0c7"} Dec 10 14:53:18 crc kubenswrapper[4727]: I1210 14:53:18.812189 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg" event={"ID":"adc7591e-00e9-4ed5-9d6b-729a283cf25d","Type":"ContainerStarted","Data":"4f567984f9476236c88429d930b43b023759628138e506bea24feb2dd524656e"} Dec 10 14:53:18 crc kubenswrapper[4727]: I1210 14:53:18.816881 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-dwczj" event={"ID":"14d03018-c372-4ede-bb5d-47efd53f4d51","Type":"ContainerStarted","Data":"0edca2bf865d8163114a4c42cef4c5120857a21b1b781a7443874c9d9d6165ae"} Dec 10 14:53:18 crc kubenswrapper[4727]: I1210 14:53:18.817680 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-dwczj" Dec 10 14:53:18 crc kubenswrapper[4727]: I1210 14:53:18.820084 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-dwczj" Dec 10 14:53:18 crc kubenswrapper[4727]: I1210 14:53:18.882059 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-dwczj" podStartSLOduration=16.048693801 podStartE2EDuration="1m11.882024637s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:09.16933435 +0000 UTC m=+1233.364108892" lastFinishedPulling="2025-12-10 14:53:05.002665186 +0000 UTC m=+1289.197439728" observedRunningTime="2025-12-10 14:53:18.879071032 +0000 UTC m=+1303.073845574" watchObservedRunningTime="2025-12-10 14:53:18.882024637 +0000 UTC m=+1303.076799179" Dec 10 14:53:18 crc kubenswrapper[4727]: E1210 14:53:18.995408 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" podUID="4bb1a719-bf76-4753-9bda-e5b2a71b2f96" Dec 10 14:53:19 crc kubenswrapper[4727]: E1210 14:53:19.069446 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5rmxn" podUID="da47343a-d98d-4f70-bc94-ae74257914e2" Dec 10 14:53:19 crc kubenswrapper[4727]: I1210 14:53:19.827719 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" event={"ID":"4bb1a719-bf76-4753-9bda-e5b2a71b2f96","Type":"ContainerStarted","Data":"dca5aa61af365c6e15314179635581554ac174d4ce403667098543bc9f6d7f92"} Dec 10 14:53:19 crc kubenswrapper[4727]: E1210 14:53:19.829558 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/openstack-k8s-operators/telemetry-operator:c4794e7165126ca78a1af546bb4ba50c90b5c4e1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" podUID="4bb1a719-bf76-4753-9bda-e5b2a71b2f96" Dec 10 14:53:19 crc 
kubenswrapper[4727]: I1210 14:53:19.830723 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7vs7p" event={"ID":"ac673936-2054-4418-bb79-5aad0e79b264","Type":"ContainerStarted","Data":"df4492db83f3e8ad9509070f645b8e47b4f2356f5b3846e1de0c6779740b21d5"} Dec 10 14:53:19 crc kubenswrapper[4727]: I1210 14:53:19.831748 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7vs7p" Dec 10 14:53:19 crc kubenswrapper[4727]: I1210 14:53:19.834755 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" event={"ID":"492a4765-2161-48a4-a37b-2a11c919ebcf","Type":"ContainerStarted","Data":"1a1e194c0d7a5ac27d6330537366f58ee047a64e7d1355eca5a01fc7871b71c9"} Dec 10 14:53:19 crc kubenswrapper[4727]: I1210 14:53:19.835759 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7vs7p" Dec 10 14:53:19 crc kubenswrapper[4727]: I1210 14:53:19.837093 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5rmxn" event={"ID":"da47343a-d98d-4f70-bc94-ae74257914e2","Type":"ContainerStarted","Data":"26a131e394066cb067689a2a8c8fa28771dfb5fff1d18bfb58c865d00b9f6117"} Dec 10 14:53:19 crc kubenswrapper[4727]: I1210 14:53:19.880603 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7vs7p" podStartSLOduration=19.021014497 podStartE2EDuration="1m12.880574329s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.09753292 +0000 UTC m=+1237.292307462" lastFinishedPulling="2025-12-10 14:53:06.957092752 +0000 UTC m=+1291.151867294" observedRunningTime="2025-12-10 14:53:19.873188802 +0000 UTC m=+1304.067963354" watchObservedRunningTime="2025-12-10 14:53:19.880574329 +0000 UTC m=+1304.075348871" Dec 10 14:53:20 crc kubenswrapper[4727]: I1210 14:53:20.781498 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5f9b77867b-r8l6w" Dec 10 14:53:20 crc kubenswrapper[4727]: I1210 14:53:20.851541 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-nbjpv" event={"ID":"be805dee-d0fa-4358-b461-bfd98c87bcaa","Type":"ContainerStarted","Data":"c76e846007cb413ad26492f0141dc972eb5bc018732e007630615a7f0570a3d9"} Dec 10 14:53:21 crc kubenswrapper[4727]: I1210 14:53:21.861013 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" event={"ID":"ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e","Type":"ContainerStarted","Data":"03ac0845bed89ccdafe368d9d5a1a2913814b4c786bb1a4b47fc8b1d344f0c03"} Dec 10 14:53:24 crc kubenswrapper[4727]: I1210 14:53:24.885358 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-drdt4" event={"ID":"472840c5-9b95-4303-911c-7b27232236ea","Type":"ContainerStarted","Data":"870bc1b7303eafad1bafde4770bd75ac4b49fe3ea0bb13ecf233328665ae483d"} Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.896226 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-q9gqt" event={"ID":"4728dafd-bcaa-4b10-b2f5-9884b3d3a2b5","Type":"ContainerStarted","Data":"9acc5d0023d9668e9105feecc5f700a9b6d166a1e59551579579b7efe515cbdc"} Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.896704 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-q9gqt" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.898835 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-q9gqt" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.899348 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-tvzpk" event={"ID":"42310864-a5cb-44fd-b5eb-c5bf1f9c8ce9","Type":"ContainerStarted","Data":"de34c2ebb97dbc59dff53c49556924132525e8d71f2a0268e8f1c8bb8a779172"} Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.899607 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-tvzpk" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.901386 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-tvzpk" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.902134 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-fzvpw" event={"ID":"65cd268a-1c59-4115-8732-085b07c41edf","Type":"ContainerStarted","Data":"53723792047c77ebec4b4b2a985a8ac63ebc96dcc0d9e581153a813f3d6f529b"} Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.902666 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-fzvpw" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.904051 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-fzvpw" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.904535 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tmlwp" event={"ID":"862ae668-98ff-4531-9eca-6309953e1333","Type":"ContainerStarted","Data":"864840551e3bab9dd485308f0df9e6d02bca616824fe6651894ee4bef5b86ac8"} Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.905791 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tmlwp" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.908338 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-m2c89" event={"ID":"365ce9e6-678d-4036-9971-3c82e553fa22","Type":"ContainerStarted","Data":"fd29afa6bf8b5a78b7eb7d7dcfa31ceec10c9a192042392f5f57965c3eaed713"} Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.908426 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tmlwp" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.908918 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-m2c89" Dec 10 14:53:25 crc 
kubenswrapper[4727]: I1210 14:53:25.910883 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-m2c89" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.912623 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-5s987" event={"ID":"713c2d47-7281-46d4-bbcd-16fba5161b5a","Type":"ContainerStarted","Data":"c572e77b6a8f84163f0454137e7d6cf7e532b9dba8bfddedd02f4a5eb35e11a4"} Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.912773 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-5s987" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.914526 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-5s987" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.919644 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" event={"ID":"eae9110d-9b14-4360-9e66-60bf84efae12","Type":"ContainerStarted","Data":"b4f150215417b26231bccfb669b22ac0aba7be70126390191ca30b931248f950"} Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.919815 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.921523 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-q9gqt" podStartSLOduration=23.186950162 podStartE2EDuration="1m18.921510484s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.097956021 +0000 UTC m=+1237.292730563" lastFinishedPulling="2025-12-10 14:53:08.832516343 +0000 UTC m=+1293.027290885" observedRunningTime="2025-12-10 14:53:25.917783519 +0000 UTC m=+1310.112558071" watchObservedRunningTime="2025-12-10 14:53:25.921510484 +0000 UTC m=+1310.116285026" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.922769 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4" event={"ID":"f2964ecf-16a8-4906-a0ee-b2823ec9e9fd","Type":"ContainerStarted","Data":"084f4266733f83b8e4a3fab6ef3eaf4758b1dc32e2e8bb549c4542c5c79d6415"} Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.923175 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.923646 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.926020 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.928833 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-b2njb" event={"ID":"a1fce3d2-8cd4-4c57-95a4-57f04ff403a7","Type":"ContainerStarted","Data":"5c062d59c93f043e1e98f3984155fab1e519a92a95d35876feed1b2ac2e7f2f3"} Dec 10 14:53:25 crc 
kubenswrapper[4727]: I1210 14:53:25.931043 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc" event={"ID":"34a91f52-98f9-4ada-b0d1-54bba42c1035","Type":"ContainerStarted","Data":"27b2df4bff1760a456ad93cd080a6966ffbc032d43c8f996029f9ec4c0b6ed98"} Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.932088 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-drdt4" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.932107 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-nbjpv" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.932118 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.935045 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-drdt4" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.937741 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-nbjpv" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.942294 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc" Dec 10 14:53:25 crc kubenswrapper[4727]: I1210 14:53:25.952292 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-tvzpk" podStartSLOduration=23.200087933 podStartE2EDuration="1m18.9522631s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.104411924 +0000 UTC m=+1237.299186466" lastFinishedPulling="2025-12-10 14:53:08.856587101 +0000 UTC m=+1293.051361633" observedRunningTime="2025-12-10 14:53:25.942543325 +0000 UTC m=+1310.137317877" watchObservedRunningTime="2025-12-10 14:53:25.9522631 +0000 UTC m=+1310.147037652" Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.274539 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-m2c89" podStartSLOduration=27.244461299 podStartE2EDuration="1m19.27451352s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.343846222 +0000 UTC m=+1237.538620764" lastFinishedPulling="2025-12-10 14:53:05.373898443 +0000 UTC m=+1289.568672985" observedRunningTime="2025-12-10 14:53:26.273522345 +0000 UTC m=+1310.468296887" watchObservedRunningTime="2025-12-10 14:53:26.27451352 +0000 UTC m=+1310.469288062" Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.346126 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-fzvpw" podStartSLOduration=23.575313182 podStartE2EDuration="1m19.346094488s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.022927567 +0000 UTC m=+1237.217702109" lastFinishedPulling="2025-12-10 14:53:08.793708883 +0000 UTC m=+1292.988483415" observedRunningTime="2025-12-10 14:53:26.331779346 +0000 UTC m=+1310.526553888" watchObservedRunningTime="2025-12-10 
14:53:26.346094488 +0000 UTC m=+1310.540869050" Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.373017 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-tmlwp" podStartSLOduration=14.311015141 podStartE2EDuration="1m19.372992538s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.04844918 +0000 UTC m=+1237.243223722" lastFinishedPulling="2025-12-10 14:53:18.110426577 +0000 UTC m=+1302.305201119" observedRunningTime="2025-12-10 14:53:26.370537155 +0000 UTC m=+1310.565311697" watchObservedRunningTime="2025-12-10 14:53:26.372992538 +0000 UTC m=+1310.567767080" Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.407365 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-5s987" podStartSLOduration=15.290051949 podStartE2EDuration="1m19.407343085s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.02348044 +0000 UTC m=+1237.218254982" lastFinishedPulling="2025-12-10 14:53:17.140771576 +0000 UTC m=+1301.335546118" observedRunningTime="2025-12-10 14:53:26.392563852 +0000 UTC m=+1310.587338404" watchObservedRunningTime="2025-12-10 14:53:26.407343085 +0000 UTC m=+1310.602117627" Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.432949 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jx2r4" podStartSLOduration=27.332265806 podStartE2EDuration="1m19.432894881s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.435729952 +0000 UTC m=+1237.630504494" lastFinishedPulling="2025-12-10 14:53:05.536359027 +0000 UTC m=+1289.731133569" observedRunningTime="2025-12-10 14:53:26.425421502 +0000 UTC m=+1310.620196044" watchObservedRunningTime="2025-12-10 14:53:26.432894881 +0000 UTC m=+1310.627669423" Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.480993 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" podStartSLOduration=27.542505007 podStartE2EDuration="1m19.480975625s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.435507667 +0000 UTC m=+1237.630282209" lastFinishedPulling="2025-12-10 14:53:05.373978285 +0000 UTC m=+1289.568752827" observedRunningTime="2025-12-10 14:53:26.455147433 +0000 UTC m=+1310.649921975" watchObservedRunningTime="2025-12-10 14:53:26.480975625 +0000 UTC m=+1310.675750187" Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.482325 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-nbjpv" podStartSLOduration=14.34736927 podStartE2EDuration="1m19.482318199s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:12.981937981 +0000 UTC m=+1237.176712523" lastFinishedPulling="2025-12-10 14:53:18.11688691 +0000 UTC m=+1302.311661452" observedRunningTime="2025-12-10 14:53:26.476346628 +0000 UTC m=+1310.671121170" watchObservedRunningTime="2025-12-10 14:53:26.482318199 +0000 UTC m=+1310.677092741" Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.498020 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-hqsbc" podStartSLOduration=27.5331451 podStartE2EDuration="1m19.497993435s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.425269998 +0000 UTC m=+1237.620044540" lastFinishedPulling="2025-12-10 14:53:05.390118333 +0000 UTC m=+1289.584892875" observedRunningTime="2025-12-10 14:53:26.494144668 +0000 UTC m=+1310.688919210" watchObservedRunningTime="2025-12-10 14:53:26.497993435 +0000 UTC m=+1310.692767977" Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.519413 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-drdt4" podStartSLOduration=23.722456818 podStartE2EDuration="1m19.519393255s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.023831299 +0000 UTC m=+1237.218605841" lastFinishedPulling="2025-12-10 14:53:08.820767736 +0000 UTC m=+1293.015542278" observedRunningTime="2025-12-10 14:53:26.51602068 +0000 UTC m=+1310.710795222" watchObservedRunningTime="2025-12-10 14:53:26.519393255 +0000 UTC m=+1310.714167807" Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.940377 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-b2njb" event={"ID":"a1fce3d2-8cd4-4c57-95a4-57f04ff403a7","Type":"ContainerStarted","Data":"f2a08faefda0ce1c91d45bfa8ffecdcbb595ba8fe5f9111c2a569db1dc9379a9"} Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.941137 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-b2njb" Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.942757 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg" event={"ID":"adc7591e-00e9-4ed5-9d6b-729a283cf25d","Type":"ContainerStarted","Data":"13e11db88130ca4cbc2e756722116ee7201251d6cc9ac77839664db888b80a61"} Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.943618 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg" Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.946624 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" event={"ID":"492a4765-2161-48a4-a37b-2a11c919ebcf","Type":"ContainerStarted","Data":"32c1b858d3d95a8408fba5c0c61658523fc4f31c1204d34c7c11d489e3dc3a05"} Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.960332 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.967458 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" Dec 10 14:53:26 crc kubenswrapper[4727]: I1210 14:53:26.976225 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-b2njb" podStartSLOduration=15.143806197 podStartE2EDuration="1m19.976207854s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.381016721 +0000 UTC m=+1237.575791263" 
lastFinishedPulling="2025-12-10 14:53:18.213418378 +0000 UTC m=+1302.408192920" observedRunningTime="2025-12-10 14:53:26.967983376 +0000 UTC m=+1311.162757928" watchObservedRunningTime="2025-12-10 14:53:26.976207854 +0000 UTC m=+1311.170982396" Dec 10 14:53:27 crc kubenswrapper[4727]: I1210 14:53:27.006343 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh8tfl" podStartSLOduration=68.327946349 podStartE2EDuration="1m20.006321125s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:53:06.431967209 +0000 UTC m=+1290.626741751" lastFinishedPulling="2025-12-10 14:53:18.110341985 +0000 UTC m=+1302.305116527" observedRunningTime="2025-12-10 14:53:26.997592084 +0000 UTC m=+1311.192366626" watchObservedRunningTime="2025-12-10 14:53:27.006321125 +0000 UTC m=+1311.201095657" Dec 10 14:53:27 crc kubenswrapper[4727]: I1210 14:53:27.954807 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" event={"ID":"ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e","Type":"ContainerStarted","Data":"3e5c819f9c08296174887fa96f7dde38c80e266ea4a746f1736e781102bf11c6"} Dec 10 14:53:27 crc kubenswrapper[4727]: I1210 14:53:27.955287 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" Dec 10 14:53:27 crc kubenswrapper[4727]: I1210 14:53:27.958791 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-szdjh" event={"ID":"db7bd96a-8494-4041-9277-93705f23849d","Type":"ContainerStarted","Data":"fe977be6e69dd7e92b734cb64b56ea645d057bec9a2fe8edefe26181a3f72304"} Dec 10 14:53:27 crc kubenswrapper[4727]: I1210 14:53:27.958969 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-szdjh" Dec 10 14:53:27 crc kubenswrapper[4727]: I1210 14:53:27.961163 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5rmxn" event={"ID":"da47343a-d98d-4f70-bc94-ae74257914e2","Type":"ContainerStarted","Data":"58175c43b395e5e606162225cbd6e78f4d33d10376335051c1e52a039858fcfe"} Dec 10 14:53:27 crc kubenswrapper[4727]: I1210 14:53:27.961392 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5rmxn" Dec 10 14:53:27 crc kubenswrapper[4727]: I1210 14:53:27.961899 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" Dec 10 14:53:27 crc kubenswrapper[4727]: I1210 14:53:27.975229 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg" podStartSLOduration=8.368643746 podStartE2EDuration="1m20.975215088s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.307445172 +0000 UTC m=+1237.502219714" lastFinishedPulling="2025-12-10 14:53:25.914016524 +0000 UTC m=+1310.108791056" observedRunningTime="2025-12-10 14:53:27.043215396 +0000 UTC m=+1311.237989958" watchObservedRunningTime="2025-12-10 14:53:27.975215088 +0000 UTC m=+1312.169989630" Dec 10 14:53:27 crc kubenswrapper[4727]: I1210 14:53:27.977043 4727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2jdhx" podStartSLOduration=69.274085517 podStartE2EDuration="1m20.977036764s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:53:06.410380744 +0000 UTC m=+1290.605155286" lastFinishedPulling="2025-12-10 14:53:18.113331991 +0000 UTC m=+1302.308106533" observedRunningTime="2025-12-10 14:53:27.974335325 +0000 UTC m=+1312.169109877" watchObservedRunningTime="2025-12-10 14:53:27.977036764 +0000 UTC m=+1312.171811306" Dec 10 14:53:28 crc kubenswrapper[4727]: I1210 14:53:28.015575 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-szdjh" podStartSLOduration=8.182681148 podStartE2EDuration="1m21.015546896s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.079253608 +0000 UTC m=+1237.274028150" lastFinishedPulling="2025-12-10 14:53:25.912119356 +0000 UTC m=+1310.106893898" observedRunningTime="2025-12-10 14:53:27.997266475 +0000 UTC m=+1312.192041007" watchObservedRunningTime="2025-12-10 14:53:28.015546896 +0000 UTC m=+1312.210321438" Dec 10 14:53:28 crc kubenswrapper[4727]: I1210 14:53:28.016944 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5rmxn" podStartSLOduration=7.130989085 podStartE2EDuration="1m21.016936032s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.183551693 +0000 UTC m=+1237.378326235" lastFinishedPulling="2025-12-10 14:53:27.06949864 +0000 UTC m=+1311.264273182" observedRunningTime="2025-12-10 14:53:28.014469999 +0000 UTC m=+1312.209244541" watchObservedRunningTime="2025-12-10 14:53:28.016936032 +0000 UTC m=+1312.211710574" Dec 10 14:53:34 crc kubenswrapper[4727]: I1210 14:53:34.017646 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" event={"ID":"9cf31fa3-bbae-4fcd-9d8a-c11a6b291642","Type":"ContainerStarted","Data":"52d2a92f1fbfaa3928cb7040d1e030b0596dfc2991a86252b17d58e963ecaeb2"} Dec 10 14:53:34 crc kubenswrapper[4727]: I1210 14:53:34.019356 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" Dec 10 14:53:34 crc kubenswrapper[4727]: I1210 14:53:34.019450 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" event={"ID":"4bb1a719-bf76-4753-9bda-e5b2a71b2f96","Type":"ContainerStarted","Data":"e297698c73c6e922a9b84804c4dbfa270b88ba1e40665482cf433f0101f066db"} Dec 10 14:53:34 crc kubenswrapper[4727]: I1210 14:53:34.019710 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" Dec 10 14:53:34 crc kubenswrapper[4727]: I1210 14:53:34.044211 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" podStartSLOduration=7.107768157 podStartE2EDuration="1m27.04418689s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.425426872 +0000 UTC m=+1237.620201404" lastFinishedPulling="2025-12-10 14:53:33.361845595 +0000 UTC m=+1317.556620137" 
observedRunningTime="2025-12-10 14:53:34.041875952 +0000 UTC m=+1318.236650524" watchObservedRunningTime="2025-12-10 14:53:34.04418689 +0000 UTC m=+1318.238961432" Dec 10 14:53:34 crc kubenswrapper[4727]: I1210 14:53:34.073761 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" podStartSLOduration=7.483381015 podStartE2EDuration="1m27.073743007s" podCreationTimestamp="2025-12-10 14:52:07 +0000 UTC" firstStartedPulling="2025-12-10 14:52:13.334113786 +0000 UTC m=+1237.528888328" lastFinishedPulling="2025-12-10 14:53:32.924475778 +0000 UTC m=+1317.119250320" observedRunningTime="2025-12-10 14:53:34.067654693 +0000 UTC m=+1318.262429265" watchObservedRunningTime="2025-12-10 14:53:34.073743007 +0000 UTC m=+1318.268517539" Dec 10 14:53:38 crc kubenswrapper[4727]: I1210 14:53:38.053579 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-szdjh" Dec 10 14:53:38 crc kubenswrapper[4727]: I1210 14:53:38.060560 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5rmxn" Dec 10 14:53:38 crc kubenswrapper[4727]: I1210 14:53:38.573255 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9zpx4" Dec 10 14:53:38 crc kubenswrapper[4727]: I1210 14:53:38.573321 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-b2njb" Dec 10 14:53:38 crc kubenswrapper[4727]: I1210 14:53:38.615790 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5bbb8fffcc-sml6p" Dec 10 14:53:38 crc kubenswrapper[4727]: I1210 14:53:38.624302 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-dbvwg" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.582056 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wkmqx"] Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.584341 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wkmqx" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.588742 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.589045 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-j9z2s" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.589214 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.592006 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wkmqx"] Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.601300 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.679886 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7rbx\" (UniqueName: \"kubernetes.io/projected/5e99022a-2268-4c86-8192-9af3f5abf6e5-kube-api-access-p7rbx\") pod \"dnsmasq-dns-675f4bcbfc-wkmqx\" (UID: \"5e99022a-2268-4c86-8192-9af3f5abf6e5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wkmqx" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.679975 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e99022a-2268-4c86-8192-9af3f5abf6e5-config\") pod \"dnsmasq-dns-675f4bcbfc-wkmqx\" (UID: \"5e99022a-2268-4c86-8192-9af3f5abf6e5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wkmqx" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.726794 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m4jlc"] Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.728083 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.730224 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.749989 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m4jlc"] Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.781979 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7rbx\" (UniqueName: \"kubernetes.io/projected/5e99022a-2268-4c86-8192-9af3f5abf6e5-kube-api-access-p7rbx\") pod \"dnsmasq-dns-675f4bcbfc-wkmqx\" (UID: \"5e99022a-2268-4c86-8192-9af3f5abf6e5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wkmqx" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.782393 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e99022a-2268-4c86-8192-9af3f5abf6e5-config\") pod \"dnsmasq-dns-675f4bcbfc-wkmqx\" (UID: \"5e99022a-2268-4c86-8192-9af3f5abf6e5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wkmqx" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.783545 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e99022a-2268-4c86-8192-9af3f5abf6e5-config\") pod \"dnsmasq-dns-675f4bcbfc-wkmqx\" (UID: \"5e99022a-2268-4c86-8192-9af3f5abf6e5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wkmqx" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.810018 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7rbx\" (UniqueName: \"kubernetes.io/projected/5e99022a-2268-4c86-8192-9af3f5abf6e5-kube-api-access-p7rbx\") pod \"dnsmasq-dns-675f4bcbfc-wkmqx\" (UID: \"5e99022a-2268-4c86-8192-9af3f5abf6e5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wkmqx" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.885553 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-config\") pod \"dnsmasq-dns-78dd6ddcc-m4jlc\" (UID: \"a229e6d8-d8d6-40bd-9788-8ba7c86602b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.885649 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vtk9\" (UniqueName: \"kubernetes.io/projected/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-kube-api-access-7vtk9\") pod \"dnsmasq-dns-78dd6ddcc-m4jlc\" (UID: \"a229e6d8-d8d6-40bd-9788-8ba7c86602b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.885821 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-m4jlc\" (UID: \"a229e6d8-d8d6-40bd-9788-8ba7c86602b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.915203 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wkmqx" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.987293 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-config\") pod \"dnsmasq-dns-78dd6ddcc-m4jlc\" (UID: \"a229e6d8-d8d6-40bd-9788-8ba7c86602b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.987409 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vtk9\" (UniqueName: \"kubernetes.io/projected/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-kube-api-access-7vtk9\") pod \"dnsmasq-dns-78dd6ddcc-m4jlc\" (UID: \"a229e6d8-d8d6-40bd-9788-8ba7c86602b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.987494 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-m4jlc\" (UID: \"a229e6d8-d8d6-40bd-9788-8ba7c86602b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.988621 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-m4jlc\" (UID: \"a229e6d8-d8d6-40bd-9788-8ba7c86602b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" Dec 10 14:53:58 crc kubenswrapper[4727]: I1210 14:53:58.988652 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-config\") pod \"dnsmasq-dns-78dd6ddcc-m4jlc\" (UID: \"a229e6d8-d8d6-40bd-9788-8ba7c86602b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" Dec 10 14:53:59 crc kubenswrapper[4727]: I1210 14:53:59.012843 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vtk9\" (UniqueName: \"kubernetes.io/projected/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-kube-api-access-7vtk9\") pod \"dnsmasq-dns-78dd6ddcc-m4jlc\" (UID: \"a229e6d8-d8d6-40bd-9788-8ba7c86602b4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" Dec 10 14:53:59 crc kubenswrapper[4727]: I1210 14:53:59.048692 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" Dec 10 14:53:59 crc kubenswrapper[4727]: I1210 14:53:59.384965 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wkmqx"] Dec 10 14:53:59 crc kubenswrapper[4727]: W1210 14:53:59.390738 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e99022a_2268_4c86_8192_9af3f5abf6e5.slice/crio-958178e72e12e42cdf096a0dbeadc3064e83f5b23ff914c1dc544a84ab69c21e WatchSource:0}: Error finding container 958178e72e12e42cdf096a0dbeadc3064e83f5b23ff914c1dc544a84ab69c21e: Status 404 returned error can't find the container with id 958178e72e12e42cdf096a0dbeadc3064e83f5b23ff914c1dc544a84ab69c21e Dec 10 14:53:59 crc kubenswrapper[4727]: I1210 14:53:59.533035 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m4jlc"] Dec 10 14:53:59 crc kubenswrapper[4727]: W1210 14:53:59.536800 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda229e6d8_d8d6_40bd_9788_8ba7c86602b4.slice/crio-269460f6b21e7e224a87508daeb98d1ee1a7c38fb94be77d7dda2cc098223705 WatchSource:0}: Error finding container 269460f6b21e7e224a87508daeb98d1ee1a7c38fb94be77d7dda2cc098223705: Status 404 returned error can't find the container with id 269460f6b21e7e224a87508daeb98d1ee1a7c38fb94be77d7dda2cc098223705 Dec 10 14:54:00 crc kubenswrapper[4727]: I1210 14:54:00.238191 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" event={"ID":"a229e6d8-d8d6-40bd-9788-8ba7c86602b4","Type":"ContainerStarted","Data":"269460f6b21e7e224a87508daeb98d1ee1a7c38fb94be77d7dda2cc098223705"} Dec 10 14:54:00 crc kubenswrapper[4727]: I1210 14:54:00.241355 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-wkmqx" event={"ID":"5e99022a-2268-4c86-8192-9af3f5abf6e5","Type":"ContainerStarted","Data":"958178e72e12e42cdf096a0dbeadc3064e83f5b23ff914c1dc544a84ab69c21e"} Dec 10 14:54:01 crc kubenswrapper[4727]: I1210 14:54:01.769840 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wkmqx"] Dec 10 14:54:01 crc kubenswrapper[4727]: I1210 14:54:01.802824 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fctzl"] Dec 10 14:54:01 crc kubenswrapper[4727]: I1210 14:54:01.804605 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:54:01 crc kubenswrapper[4727]: I1210 14:54:01.826152 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fctzl"] Dec 10 14:54:01 crc kubenswrapper[4727]: I1210 14:54:01.928727 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc8350a-ea00-40a2-915f-3337cd27c244-config\") pod \"dnsmasq-dns-666b6646f7-fctzl\" (UID: \"6cc8350a-ea00-40a2-915f-3337cd27c244\") " pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:54:01 crc kubenswrapper[4727]: I1210 14:54:01.928785 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc8350a-ea00-40a2-915f-3337cd27c244-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fctzl\" (UID: \"6cc8350a-ea00-40a2-915f-3337cd27c244\") " pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:54:01 crc kubenswrapper[4727]: I1210 14:54:01.928847 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lkxj\" (UniqueName: \"kubernetes.io/projected/6cc8350a-ea00-40a2-915f-3337cd27c244-kube-api-access-9lkxj\") pod \"dnsmasq-dns-666b6646f7-fctzl\" (UID: \"6cc8350a-ea00-40a2-915f-3337cd27c244\") " pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.030790 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc8350a-ea00-40a2-915f-3337cd27c244-config\") pod \"dnsmasq-dns-666b6646f7-fctzl\" (UID: \"6cc8350a-ea00-40a2-915f-3337cd27c244\") " pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.030835 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc8350a-ea00-40a2-915f-3337cd27c244-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fctzl\" (UID: \"6cc8350a-ea00-40a2-915f-3337cd27c244\") " pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.030881 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lkxj\" (UniqueName: \"kubernetes.io/projected/6cc8350a-ea00-40a2-915f-3337cd27c244-kube-api-access-9lkxj\") pod \"dnsmasq-dns-666b6646f7-fctzl\" (UID: \"6cc8350a-ea00-40a2-915f-3337cd27c244\") " pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.032239 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc8350a-ea00-40a2-915f-3337cd27c244-config\") pod \"dnsmasq-dns-666b6646f7-fctzl\" (UID: \"6cc8350a-ea00-40a2-915f-3337cd27c244\") " pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.032779 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc8350a-ea00-40a2-915f-3337cd27c244-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fctzl\" (UID: \"6cc8350a-ea00-40a2-915f-3337cd27c244\") " pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.087003 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lkxj\" (UniqueName: 
\"kubernetes.io/projected/6cc8350a-ea00-40a2-915f-3337cd27c244-kube-api-access-9lkxj\") pod \"dnsmasq-dns-666b6646f7-fctzl\" (UID: \"6cc8350a-ea00-40a2-915f-3337cd27c244\") " pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.131830 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.164657 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m4jlc"] Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.206431 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7d78q"] Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.207980 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.227730 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7d78q"] Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.335886 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d6fded7-3029-48a5-96b8-6f8296acd34c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7d78q\" (UID: \"0d6fded7-3029-48a5-96b8-6f8296acd34c\") " pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.335955 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d6fded7-3029-48a5-96b8-6f8296acd34c-config\") pod \"dnsmasq-dns-57d769cc4f-7d78q\" (UID: \"0d6fded7-3029-48a5-96b8-6f8296acd34c\") " pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.336170 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwfmh\" (UniqueName: \"kubernetes.io/projected/0d6fded7-3029-48a5-96b8-6f8296acd34c-kube-api-access-xwfmh\") pod \"dnsmasq-dns-57d769cc4f-7d78q\" (UID: \"0d6fded7-3029-48a5-96b8-6f8296acd34c\") " pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.439279 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d6fded7-3029-48a5-96b8-6f8296acd34c-config\") pod \"dnsmasq-dns-57d769cc4f-7d78q\" (UID: \"0d6fded7-3029-48a5-96b8-6f8296acd34c\") " pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.439392 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwfmh\" (UniqueName: \"kubernetes.io/projected/0d6fded7-3029-48a5-96b8-6f8296acd34c-kube-api-access-xwfmh\") pod \"dnsmasq-dns-57d769cc4f-7d78q\" (UID: \"0d6fded7-3029-48a5-96b8-6f8296acd34c\") " pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.439454 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d6fded7-3029-48a5-96b8-6f8296acd34c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7d78q\" (UID: \"0d6fded7-3029-48a5-96b8-6f8296acd34c\") " pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.442205 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d6fded7-3029-48a5-96b8-6f8296acd34c-config\") pod \"dnsmasq-dns-57d769cc4f-7d78q\" (UID: \"0d6fded7-3029-48a5-96b8-6f8296acd34c\") " pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.444272 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d6fded7-3029-48a5-96b8-6f8296acd34c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7d78q\" (UID: \"0d6fded7-3029-48a5-96b8-6f8296acd34c\") " pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.464197 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwfmh\" (UniqueName: \"kubernetes.io/projected/0d6fded7-3029-48a5-96b8-6f8296acd34c-kube-api-access-xwfmh\") pod \"dnsmasq-dns-57d769cc4f-7d78q\" (UID: \"0d6fded7-3029-48a5-96b8-6f8296acd34c\") " pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.573694 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" Dec 10 14:54:02 crc kubenswrapper[4727]: I1210 14:54:02.756597 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fctzl"] Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.022256 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.024493 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.038249 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.038468 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.038649 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.038829 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gl8gb" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.039079 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.039258 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.040247 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.059969 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.218198 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.218538 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.218569 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.218629 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.218659 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz5s9\" (UniqueName: \"kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-kube-api-access-nz5s9\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.218702 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/28ce859f-f595-4f9a-ad5d-1131acd951c7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.218740 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/28ce859f-f595-4f9a-ad5d-1131acd951c7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.218790 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.218829 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.218880 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.218925 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-config-data\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.320881 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.320980 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.321034 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.321056 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-config-data\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.321146 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.321180 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.321216 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.321266 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.321299 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz5s9\" (UniqueName: \"kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-kube-api-access-nz5s9\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " 
pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.321336 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/28ce859f-f595-4f9a-ad5d-1131acd951c7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.321362 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/28ce859f-f595-4f9a-ad5d-1131acd951c7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.321737 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.323048 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.340661 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.340678 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/28ce859f-f595-4f9a-ad5d-1131acd951c7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.341295 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.341962 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.342858 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/28ce859f-f595-4f9a-ad5d-1131acd951c7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.344417 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.344458 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/71300f1ee2a8cff2dcea612b02795bd22bb9b4f3ccfc60fa5e061f401c587a7e/globalmount\"" pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.360607 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz5s9\" (UniqueName: \"kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-kube-api-access-nz5s9\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.364564 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.374470 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.374477 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" event={"ID":"6cc8350a-ea00-40a2-915f-3337cd27c244","Type":"ContainerStarted","Data":"865155810d7188c8e3ffcb113e595a3850926f1ad6555bd27c31c0dc7f0ae553"} Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.376631 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.378380 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.394224 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.394506 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.394708 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.394885 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.395515 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.395706 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4glfc" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.404464 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.418815 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7d78q"] Dec 10 14:54:03 crc kubenswrapper[4727]: W1210 
14:54:03.435707 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d6fded7_3029_48a5_96b8_6f8296acd34c.slice/crio-fde93f1dc5820c19bc780299ace0a04e07047ac792f8376597ae0b1f4dea997d WatchSource:0}: Error finding container fde93f1dc5820c19bc780299ace0a04e07047ac792f8376597ae0b1f4dea997d: Status 404 returned error can't find the container with id fde93f1dc5820c19bc780299ace0a04e07047ac792f8376597ae0b1f4dea997d Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.529708 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.529774 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8216a031-5caf-4b21-9613-c798dd35dfb7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.529798 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.529817 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.529848 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8216a031-5caf-4b21-9613-c798dd35dfb7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.530007 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.530109 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.530156 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.530193 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.530235 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnp6q\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-kube-api-access-bnp6q\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.530410 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.631724 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.632125 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.632260 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnp6q\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-kube-api-access-bnp6q\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.632474 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.632646 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.632709 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.632963 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.633083 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8216a031-5caf-4b21-9613-c798dd35dfb7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.633209 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.633323 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.633446 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8216a031-5caf-4b21-9613-c798dd35dfb7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.633617 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.634793 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.633102 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.635880 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.637186 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8216a031-5caf-4b21-9613-c798dd35dfb7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.637244 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.637266 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1dbed145fca7d880429c0190be0a1203fa2f5dbc05f2cc520d9aa531cc00aeeb/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.646431 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.650179 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.652736 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnp6q\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-kube-api-access-bnp6q\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.653394 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8216a031-5caf-4b21-9613-c798dd35dfb7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.658151 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: I1210 14:54:03.677625 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\") pod \"rabbitmq-cell1-server-0\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:03 crc kubenswrapper[4727]: 
I1210 14:54:03.780199 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.236754 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-config-data\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.242300 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.263214 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\") pod \"rabbitmq-server-0\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.344578 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.383997 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" event={"ID":"0d6fded7-3029-48a5-96b8-6f8296acd34c","Type":"ContainerStarted","Data":"fde93f1dc5820c19bc780299ace0a04e07047ac792f8376597ae0b1f4dea997d"} Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.385501 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8216a031-5caf-4b21-9613-c798dd35dfb7","Type":"ContainerStarted","Data":"adabb6c3cdc1b7cc67865cfdb6650faa2826c10c322522f62e89b9f92da846c6"} Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.631845 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.634961 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.639207 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.641798 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-sdb2l" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.642102 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.642276 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.644291 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.647937 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.756433 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0273aa7-a359-4d67-9c86-7920c5d69e11-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.756534 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0273aa7-a359-4d67-9c86-7920c5d69e11-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.756579 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f0273aa7-a359-4d67-9c86-7920c5d69e11-kolla-config\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.756609 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f0273aa7-a359-4d67-9c86-7920c5d69e11-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.756643 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsrsh\" (UniqueName: \"kubernetes.io/projected/f0273aa7-a359-4d67-9c86-7920c5d69e11-kube-api-access-fsrsh\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.756681 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0273aa7-a359-4d67-9c86-7920c5d69e11-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.756715 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f0273aa7-a359-4d67-9c86-7920c5d69e11-config-data-default\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.756755 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0f2e7d74-ea58-48b3-b77a-a4a5d703ce7c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f2e7d74-ea58-48b3-b77a-a4a5d703ce7c\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.858662 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0273aa7-a359-4d67-9c86-7920c5d69e11-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.858735 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f0273aa7-a359-4d67-9c86-7920c5d69e11-kolla-config\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.858772 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f0273aa7-a359-4d67-9c86-7920c5d69e11-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.858798 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsrsh\" (UniqueName: \"kubernetes.io/projected/f0273aa7-a359-4d67-9c86-7920c5d69e11-kube-api-access-fsrsh\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.858827 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0273aa7-a359-4d67-9c86-7920c5d69e11-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.858863 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f0273aa7-a359-4d67-9c86-7920c5d69e11-config-data-default\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.858918 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0f2e7d74-ea58-48b3-b77a-a4a5d703ce7c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f2e7d74-ea58-48b3-b77a-a4a5d703ce7c\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.858951 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0273aa7-a359-4d67-9c86-7920c5d69e11-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.859522 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f0273aa7-a359-4d67-9c86-7920c5d69e11-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.860750 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f0273aa7-a359-4d67-9c86-7920c5d69e11-kolla-config\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.860889 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f0273aa7-a359-4d67-9c86-7920c5d69e11-config-data-default\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.860882 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0273aa7-a359-4d67-9c86-7920c5d69e11-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.863003 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.863051 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0f2e7d74-ea58-48b3-b77a-a4a5d703ce7c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f2e7d74-ea58-48b3-b77a-a4a5d703ce7c\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/155b5063eb17161fa999043e0158bf62835ef22aa5064a9c18d190b72942fe6e/globalmount\"" pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.873955 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0273aa7-a359-4d67-9c86-7920c5d69e11-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.886585 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0273aa7-a359-4d67-9c86-7920c5d69e11-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.903001 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0f2e7d74-ea58-48b3-b77a-a4a5d703ce7c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f2e7d74-ea58-48b3-b77a-a4a5d703ce7c\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.912273 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsrsh\" (UniqueName: \"kubernetes.io/projected/f0273aa7-a359-4d67-9c86-7920c5d69e11-kube-api-access-fsrsh\") pod \"openstack-galera-0\" (UID: \"f0273aa7-a359-4d67-9c86-7920c5d69e11\") " pod="openstack/openstack-galera-0" Dec 10 14:54:04 crc kubenswrapper[4727]: I1210 14:54:04.971718 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 10 14:54:05 crc kubenswrapper[4727]: I1210 14:54:05.972027 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 10 14:54:05 crc kubenswrapper[4727]: I1210 14:54:05.974098 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:05 crc kubenswrapper[4727]: I1210 14:54:05.977501 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 10 14:54:05 crc kubenswrapper[4727]: I1210 14:54:05.977683 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-z6mw2" Dec 10 14:54:05 crc kubenswrapper[4727]: I1210 14:54:05.977891 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 10 14:54:05 crc kubenswrapper[4727]: I1210 14:54:05.977895 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 10 14:54:05 crc kubenswrapper[4727]: I1210 14:54:05.981391 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.091690 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e2f39a-c206-4375-bea1-db945f0b3003-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.091783 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh5t8\" (UniqueName: \"kubernetes.io/projected/e1e2f39a-c206-4375-bea1-db945f0b3003-kube-api-access-xh5t8\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.091805 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e1e2f39a-c206-4375-bea1-db945f0b3003-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.091840 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e2f39a-c206-4375-bea1-db945f0b3003-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.091942 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e1e2f39a-c206-4375-bea1-db945f0b3003-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.091985 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e2f39a-c206-4375-bea1-db945f0b3003-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.092013 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-a295b337-41c8-4c84-8241-da06620f26f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a295b337-41c8-4c84-8241-da06620f26f3\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.092032 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e1e2f39a-c206-4375-bea1-db945f0b3003-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.193933 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e2f39a-c206-4375-bea1-db945f0b3003-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.193995 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh5t8\" (UniqueName: \"kubernetes.io/projected/e1e2f39a-c206-4375-bea1-db945f0b3003-kube-api-access-xh5t8\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.194054 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e1e2f39a-c206-4375-bea1-db945f0b3003-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.194109 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e2f39a-c206-4375-bea1-db945f0b3003-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.194172 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e1e2f39a-c206-4375-bea1-db945f0b3003-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.194228 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e2f39a-c206-4375-bea1-db945f0b3003-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.194256 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a295b337-41c8-4c84-8241-da06620f26f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a295b337-41c8-4c84-8241-da06620f26f3\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.194277 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e1e2f39a-c206-4375-bea1-db945f0b3003-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.194843 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e1e2f39a-c206-4375-bea1-db945f0b3003-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.195325 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e1e2f39a-c206-4375-bea1-db945f0b3003-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.195324 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e1e2f39a-c206-4375-bea1-db945f0b3003-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.196292 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e2f39a-c206-4375-bea1-db945f0b3003-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.197122 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.197356 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a295b337-41c8-4c84-8241-da06620f26f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a295b337-41c8-4c84-8241-da06620f26f3\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a75fa86e49dc663eab68f52cc5e91cc9f5a2774445bd46bf029cc5e148f9c176/globalmount\"" pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.203738 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e2f39a-c206-4375-bea1-db945f0b3003-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.203866 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e2f39a-c206-4375-bea1-db945f0b3003-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.233677 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh5t8\" (UniqueName: \"kubernetes.io/projected/e1e2f39a-c206-4375-bea1-db945f0b3003-kube-api-access-xh5t8\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.249116 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a295b337-41c8-4c84-8241-da06620f26f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a295b337-41c8-4c84-8241-da06620f26f3\") pod \"openstack-cell1-galera-0\" (UID: \"e1e2f39a-c206-4375-bea1-db945f0b3003\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.313030 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.414050 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.415550 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.420445 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.420657 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-tg288" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.420778 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.441929 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.499735 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e710c7e-8c31-487b-ade5-a403f619e489-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6e710c7e-8c31-487b-ade5-a403f619e489\") " pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.499855 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e710c7e-8c31-487b-ade5-a403f619e489-config-data\") pod \"memcached-0\" (UID: \"6e710c7e-8c31-487b-ade5-a403f619e489\") " pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.499889 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6e710c7e-8c31-487b-ade5-a403f619e489-kolla-config\") pod \"memcached-0\" (UID: \"6e710c7e-8c31-487b-ade5-a403f619e489\") " pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.499947 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqvrd\" (UniqueName: \"kubernetes.io/projected/6e710c7e-8c31-487b-ade5-a403f619e489-kube-api-access-kqvrd\") pod \"memcached-0\" (UID: \"6e710c7e-8c31-487b-ade5-a403f619e489\") " pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.500003 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e710c7e-8c31-487b-ade5-a403f619e489-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6e710c7e-8c31-487b-ade5-a403f619e489\") " pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.601801 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e710c7e-8c31-487b-ade5-a403f619e489-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6e710c7e-8c31-487b-ade5-a403f619e489\") " pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.607085 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e710c7e-8c31-487b-ade5-a403f619e489-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6e710c7e-8c31-487b-ade5-a403f619e489\") " pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.607450 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6e710c7e-8c31-487b-ade5-a403f619e489-config-data\") pod \"memcached-0\" (UID: \"6e710c7e-8c31-487b-ade5-a403f619e489\") " pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.607580 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6e710c7e-8c31-487b-ade5-a403f619e489-kolla-config\") pod \"memcached-0\" (UID: \"6e710c7e-8c31-487b-ade5-a403f619e489\") " pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.607726 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqvrd\" (UniqueName: \"kubernetes.io/projected/6e710c7e-8c31-487b-ade5-a403f619e489-kube-api-access-kqvrd\") pod \"memcached-0\" (UID: \"6e710c7e-8c31-487b-ade5-a403f619e489\") " pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.609607 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e710c7e-8c31-487b-ade5-a403f619e489-config-data\") pod \"memcached-0\" (UID: \"6e710c7e-8c31-487b-ade5-a403f619e489\") " pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.610460 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6e710c7e-8c31-487b-ade5-a403f619e489-kolla-config\") pod \"memcached-0\" (UID: \"6e710c7e-8c31-487b-ade5-a403f619e489\") " pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.612282 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e710c7e-8c31-487b-ade5-a403f619e489-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6e710c7e-8c31-487b-ade5-a403f619e489\") " pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.613300 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e710c7e-8c31-487b-ade5-a403f619e489-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6e710c7e-8c31-487b-ade5-a403f619e489\") " pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.631732 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqvrd\" (UniqueName: \"kubernetes.io/projected/6e710c7e-8c31-487b-ade5-a403f619e489-kube-api-access-kqvrd\") pod \"memcached-0\" (UID: \"6e710c7e-8c31-487b-ade5-a403f619e489\") " pod="openstack/memcached-0" Dec 10 14:54:06 crc kubenswrapper[4727]: I1210 14:54:06.796790 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 10 14:54:08 crc kubenswrapper[4727]: I1210 14:54:08.424994 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 14:54:08 crc kubenswrapper[4727]: I1210 14:54:08.426085 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 14:54:08 crc kubenswrapper[4727]: I1210 14:54:08.429188 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-knqch" Dec 10 14:54:08 crc kubenswrapper[4727]: I1210 14:54:08.450396 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 14:54:08 crc kubenswrapper[4727]: I1210 14:54:08.549676 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msms2\" (UniqueName: \"kubernetes.io/projected/5d7bed75-52e4-4b97-830b-f2b55f222732-kube-api-access-msms2\") pod \"kube-state-metrics-0\" (UID: \"5d7bed75-52e4-4b97-830b-f2b55f222732\") " pod="openstack/kube-state-metrics-0" Dec 10 14:54:08 crc kubenswrapper[4727]: I1210 14:54:08.651285 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msms2\" (UniqueName: \"kubernetes.io/projected/5d7bed75-52e4-4b97-830b-f2b55f222732-kube-api-access-msms2\") pod \"kube-state-metrics-0\" (UID: \"5d7bed75-52e4-4b97-830b-f2b55f222732\") " pod="openstack/kube-state-metrics-0" Dec 10 14:54:08 crc kubenswrapper[4727]: I1210 14:54:08.700399 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msms2\" (UniqueName: \"kubernetes.io/projected/5d7bed75-52e4-4b97-830b-f2b55f222732-kube-api-access-msms2\") pod \"kube-state-metrics-0\" (UID: \"5d7bed75-52e4-4b97-830b-f2b55f222732\") " pod="openstack/kube-state-metrics-0" Dec 10 14:54:08 crc kubenswrapper[4727]: I1210 14:54:08.748848 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.407417 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.409085 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.418338 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.418389 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.418400 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.418562 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-b7lf8" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.419017 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.436611 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.465195 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/00ee5804-d85a-432e-9295-b018259dcf38-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.465294 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/00ee5804-d85a-432e-9295-b018259dcf38-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.465329 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/00ee5804-d85a-432e-9295-b018259dcf38-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.465378 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/00ee5804-d85a-432e-9295-b018259dcf38-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.465414 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/00ee5804-d85a-432e-9295-b018259dcf38-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.465467 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp9xs\" (UniqueName: \"kubernetes.io/projected/00ee5804-d85a-432e-9295-b018259dcf38-kube-api-access-zp9xs\") pod \"alertmanager-metric-storage-0\" (UID: 
\"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.465506 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/00ee5804-d85a-432e-9295-b018259dcf38-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.566598 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/00ee5804-d85a-432e-9295-b018259dcf38-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.566957 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/00ee5804-d85a-432e-9295-b018259dcf38-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.567106 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/00ee5804-d85a-432e-9295-b018259dcf38-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.567230 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/00ee5804-d85a-432e-9295-b018259dcf38-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.567375 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp9xs\" (UniqueName: \"kubernetes.io/projected/00ee5804-d85a-432e-9295-b018259dcf38-kube-api-access-zp9xs\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.567537 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/00ee5804-d85a-432e-9295-b018259dcf38-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.567666 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/00ee5804-d85a-432e-9295-b018259dcf38-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.568017 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/00ee5804-d85a-432e-9295-b018259dcf38-alertmanager-metric-storage-db\") pod 
\"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.576693 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/00ee5804-d85a-432e-9295-b018259dcf38-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.577875 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/00ee5804-d85a-432e-9295-b018259dcf38-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.578625 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/00ee5804-d85a-432e-9295-b018259dcf38-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.591505 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/00ee5804-d85a-432e-9295-b018259dcf38-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.592517 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp9xs\" (UniqueName: \"kubernetes.io/projected/00ee5804-d85a-432e-9295-b018259dcf38-kube-api-access-zp9xs\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.598852 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/00ee5804-d85a-432e-9295-b018259dcf38-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"00ee5804-d85a-432e-9295-b018259dcf38\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.765053 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.909441 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.911763 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.916003 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.916223 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.916780 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-ls2tl" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.917193 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.917643 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.917886 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.931785 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.975339 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm6jp\" (UniqueName: \"kubernetes.io/projected/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-kube-api-access-qm6jp\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.975413 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cf724714-665b-4af6-a045-3da7b60440bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf724714-665b-4af6-a045-3da7b60440bb\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.975450 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-config\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.976471 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.976548 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.976583 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.976622 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:09 crc kubenswrapper[4727]: I1210 14:54:09.976681 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.080263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.080336 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm6jp\" (UniqueName: \"kubernetes.io/projected/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-kube-api-access-qm6jp\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.080453 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cf724714-665b-4af6-a045-3da7b60440bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf724714-665b-4af6-a045-3da7b60440bb\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.080503 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-config\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.080593 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.080661 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc 
kubenswrapper[4727]: I1210 14:54:10.080709 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.080824 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.081794 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.084650 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-config\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.084729 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.084763 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cf724714-665b-4af6-a045-3da7b60440bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf724714-665b-4af6-a045-3da7b60440bb\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bfabbbee530072909eb6f2445f2c3f4483c354d55e6afa92ad99487deb5d849d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.084831 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.085092 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.089439 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc 
kubenswrapper[4727]: I1210 14:54:10.096411 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.104225 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm6jp\" (UniqueName: \"kubernetes.io/projected/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-kube-api-access-qm6jp\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.121965 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cf724714-665b-4af6-a045-3da7b60440bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf724714-665b-4af6-a045-3da7b60440bb\") pod \"prometheus-metric-storage-0\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:10 crc kubenswrapper[4727]: I1210 14:54:10.238818 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.520133 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x2fq6"] Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.521877 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.525931 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qgkqb" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.526099 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.526261 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.573695 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-4tq5b"] Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.575981 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.597408 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x2fq6"] Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.609312 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69a1889a-3ba4-463e-bd9a-4f417ca69280-var-run-ovn\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.609371 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a1889a-3ba4-463e-bd9a-4f417ca69280-ovn-controller-tls-certs\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.609434 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fb42bc94-d07f-4121-8591-8f868d089a2a-var-log\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.609455 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb42bc94-d07f-4121-8591-8f868d089a2a-var-run\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.609469 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69a1889a-3ba4-463e-bd9a-4f417ca69280-var-log-ovn\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.609496 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb42bc94-d07f-4121-8591-8f868d089a2a-scripts\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.609530 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4njq6\" (UniqueName: \"kubernetes.io/projected/69a1889a-3ba4-463e-bd9a-4f417ca69280-kube-api-access-4njq6\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.609587 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69a1889a-3ba4-463e-bd9a-4f417ca69280-var-run\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.609603 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/fb42bc94-d07f-4121-8591-8f868d089a2a-etc-ovs\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.609631 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fb42bc94-d07f-4121-8591-8f868d089a2a-var-lib\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.609648 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46pb2\" (UniqueName: \"kubernetes.io/projected/fb42bc94-d07f-4121-8591-8f868d089a2a-kube-api-access-46pb2\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.609690 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a1889a-3ba4-463e-bd9a-4f417ca69280-combined-ca-bundle\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.609766 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69a1889a-3ba4-463e-bd9a-4f417ca69280-scripts\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.624991 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4tq5b"] Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.711677 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fb42bc94-d07f-4121-8591-8f868d089a2a-var-lib\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.711744 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46pb2\" (UniqueName: \"kubernetes.io/projected/fb42bc94-d07f-4121-8591-8f868d089a2a-kube-api-access-46pb2\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.711791 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a1889a-3ba4-463e-bd9a-4f417ca69280-combined-ca-bundle\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.711839 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69a1889a-3ba4-463e-bd9a-4f417ca69280-scripts\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.711880 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69a1889a-3ba4-463e-bd9a-4f417ca69280-var-run-ovn\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.711930 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a1889a-3ba4-463e-bd9a-4f417ca69280-ovn-controller-tls-certs\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.712002 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fb42bc94-d07f-4121-8591-8f868d089a2a-var-log\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.712031 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb42bc94-d07f-4121-8591-8f868d089a2a-var-run\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.712058 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69a1889a-3ba4-463e-bd9a-4f417ca69280-var-log-ovn\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.712100 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb42bc94-d07f-4121-8591-8f868d089a2a-scripts\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.712137 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4njq6\" (UniqueName: \"kubernetes.io/projected/69a1889a-3ba4-463e-bd9a-4f417ca69280-kube-api-access-4njq6\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.712173 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69a1889a-3ba4-463e-bd9a-4f417ca69280-var-run\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.712194 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fb42bc94-d07f-4121-8591-8f868d089a2a-etc-ovs\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.712796 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fb42bc94-d07f-4121-8591-8f868d089a2a-etc-ovs\") pod \"ovn-controller-ovs-4tq5b\" (UID: 
\"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.713025 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fb42bc94-d07f-4121-8591-8f868d089a2a-var-log\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.713253 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb42bc94-d07f-4121-8591-8f868d089a2a-var-run\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.713616 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69a1889a-3ba4-463e-bd9a-4f417ca69280-var-run\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.713746 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69a1889a-3ba4-463e-bd9a-4f417ca69280-var-log-ovn\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.715577 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb42bc94-d07f-4121-8591-8f868d089a2a-scripts\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.716190 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69a1889a-3ba4-463e-bd9a-4f417ca69280-var-run-ovn\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.716561 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fb42bc94-d07f-4121-8591-8f868d089a2a-var-lib\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.720585 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69a1889a-3ba4-463e-bd9a-4f417ca69280-scripts\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.736335 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a1889a-3ba4-463e-bd9a-4f417ca69280-combined-ca-bundle\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.736442 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/69a1889a-3ba4-463e-bd9a-4f417ca69280-ovn-controller-tls-certs\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.747508 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46pb2\" (UniqueName: \"kubernetes.io/projected/fb42bc94-d07f-4121-8591-8f868d089a2a-kube-api-access-46pb2\") pod \"ovn-controller-ovs-4tq5b\" (UID: \"fb42bc94-d07f-4121-8591-8f868d089a2a\") " pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.753597 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4njq6\" (UniqueName: \"kubernetes.io/projected/69a1889a-3ba4-463e-bd9a-4f417ca69280-kube-api-access-4njq6\") pod \"ovn-controller-x2fq6\" (UID: \"69a1889a-3ba4-463e-bd9a-4f417ca69280\") " pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.843352 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x2fq6" Dec 10 14:54:11 crc kubenswrapper[4727]: I1210 14:54:11.914038 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.447849 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.449553 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.452388 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.452732 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8dljb" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.452932 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.453065 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.453161 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.463763 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.654403 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a122a79-dced-4afa-bb4c-4c6cd806770a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.654476 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9e524e88-ef08-4576-b59f-110a359fff29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e524e88-ef08-4576-b59f-110a359fff29\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.654502 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a122a79-dced-4afa-bb4c-4c6cd806770a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.654527 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a122a79-dced-4afa-bb4c-4c6cd806770a-config\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.654555 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a122a79-dced-4afa-bb4c-4c6cd806770a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.654592 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a122a79-dced-4afa-bb4c-4c6cd806770a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.654615 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a122a79-dced-4afa-bb4c-4c6cd806770a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.654657 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xq5p\" (UniqueName: \"kubernetes.io/projected/3a122a79-dced-4afa-bb4c-4c6cd806770a-kube-api-access-6xq5p\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.755704 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a122a79-dced-4afa-bb4c-4c6cd806770a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.755767 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9e524e88-ef08-4576-b59f-110a359fff29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e524e88-ef08-4576-b59f-110a359fff29\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.755793 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a122a79-dced-4afa-bb4c-4c6cd806770a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.755812 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3a122a79-dced-4afa-bb4c-4c6cd806770a-config\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.756572 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a122a79-dced-4afa-bb4c-4c6cd806770a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.756621 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a122a79-dced-4afa-bb4c-4c6cd806770a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.756651 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a122a79-dced-4afa-bb4c-4c6cd806770a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.756720 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xq5p\" (UniqueName: \"kubernetes.io/projected/3a122a79-dced-4afa-bb4c-4c6cd806770a-kube-api-access-6xq5p\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.756997 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a122a79-dced-4afa-bb4c-4c6cd806770a-config\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.758485 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.758514 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9e524e88-ef08-4576-b59f-110a359fff29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e524e88-ef08-4576-b59f-110a359fff29\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7ff015b8746b41b4f325f2fd5ef6aa5d18dbf73afaec429e86e742ab2585e03a/globalmount\"" pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.760028 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a122a79-dced-4afa-bb4c-4c6cd806770a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.762889 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a122a79-dced-4afa-bb4c-4c6cd806770a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.767390 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a122a79-dced-4afa-bb4c-4c6cd806770a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.773373 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xq5p\" (UniqueName: \"kubernetes.io/projected/3a122a79-dced-4afa-bb4c-4c6cd806770a-kube-api-access-6xq5p\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.792950 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9e524e88-ef08-4576-b59f-110a359fff29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e524e88-ef08-4576-b59f-110a359fff29\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.831212 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a122a79-dced-4afa-bb4c-4c6cd806770a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:13 crc kubenswrapper[4727]: I1210 14:54:13.831240 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a122a79-dced-4afa-bb4c-4c6cd806770a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3a122a79-dced-4afa-bb4c-4c6cd806770a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:14 crc kubenswrapper[4727]: I1210 14:54:14.295186 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.077730 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.081407 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.083462 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-wsdvn" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.084514 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.084791 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.085178 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.088875 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.227531 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.227608 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbttw\" (UniqueName: \"kubernetes.io/projected/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-kube-api-access-mbttw\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.227669 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c2fcdaa8-8320-4b4c-b1b2-8f8409f57e84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2fcdaa8-8320-4b4c-b1b2-8f8409f57e84\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.227733 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.227791 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.227816 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.227844 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.227902 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-config\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.329602 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-config\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.329678 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.329708 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbttw\" (UniqueName: \"kubernetes.io/projected/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-kube-api-access-mbttw\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.329748 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c2fcdaa8-8320-4b4c-b1b2-8f8409f57e84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2fcdaa8-8320-4b4c-b1b2-8f8409f57e84\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.329796 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.329837 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.329852 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.329870 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.332627 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.332832 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-config\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.335686 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.335775 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c2fcdaa8-8320-4b4c-b1b2-8f8409f57e84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2fcdaa8-8320-4b4c-b1b2-8f8409f57e84\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3182c94080c3a783e4e4ecd39e8fe453966cc3518c3520cf89fc543c9822c262/globalmount\"" pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.336069 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.336631 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.337022 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.337368 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.363225 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbttw\" (UniqueName: \"kubernetes.io/projected/7b92e7c2-a91e-4b8d-9316-ea7fc4e90188-kube-api-access-mbttw\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " 
pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.389528 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c2fcdaa8-8320-4b4c-b1b2-8f8409f57e84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2fcdaa8-8320-4b4c-b1b2-8f8409f57e84\") pod \"ovsdbserver-sb-0\" (UID: \"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:17 crc kubenswrapper[4727]: I1210 14:54:17.407760 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.474290 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-hs594"] Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.479020 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.482501 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.482562 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.483361 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-gjks5" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.483679 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.483844 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.522816 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-664b687b54-hs594\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.523005 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-664b687b54-hs594\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.523251 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95qb2\" (UniqueName: \"kubernetes.io/projected/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-kube-api-access-95qb2\") pod \"cloudkitty-lokistack-distributor-664b687b54-hs594\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.523328 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-config\") pod \"cloudkitty-lokistack-distributor-664b687b54-hs594\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.523446 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-664b687b54-hs594\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.535975 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-hs594"] Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.628145 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-664b687b54-hs594\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.628260 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-664b687b54-hs594\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.628418 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95qb2\" (UniqueName: \"kubernetes.io/projected/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-kube-api-access-95qb2\") pod \"cloudkitty-lokistack-distributor-664b687b54-hs594\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.628469 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-config\") pod \"cloudkitty-lokistack-distributor-664b687b54-hs594\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.628535 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-664b687b54-hs594\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.632258 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-config\") pod \"cloudkitty-lokistack-distributor-664b687b54-hs594\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " 
pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.636206 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-664b687b54-hs594\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.645586 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-664b687b54-hs594\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.659593 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-664b687b54-hs594\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.669926 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-9776w"] Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.676161 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.682591 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.683128 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.683292 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.708945 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-9776w"] Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.729318 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95qb2\" (UniqueName: \"kubernetes.io/projected/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-kube-api-access-95qb2\") pod \"cloudkitty-lokistack-distributor-664b687b54-hs594\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.730285 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpsx4\" (UniqueName: \"kubernetes.io/projected/2375881f-bb87-45f5-83a1-a314f445945d-kube-api-access-lpsx4\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.730339 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.730376 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2375881f-bb87-45f5-83a1-a314f445945d-config\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.730415 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.730436 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.730490 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.780422 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"] Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.781942 4727 util.go:30] "No sandbox for pod can be found. 
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.785557 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.789523 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.810248 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"]
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.835083 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpsx4\" (UniqueName: \"kubernetes.io/projected/2375881f-bb87-45f5-83a1-a314f445945d-kube-api-access-lpsx4\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.835170 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.835204 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2375881f-bb87-45f5-83a1-a314f445945d-config\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.835256 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.835287 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.835359 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.837184 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2375881f-bb87-45f5-83a1-a314f445945d-config\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w"
\"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.837364 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.837499 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.842613 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.843687 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.877008 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpsx4\" (UniqueName: \"kubernetes.io/projected/2375881f-bb87-45f5-83a1-a314f445945d-kube-api-access-lpsx4\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.885875 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-5467947bf7-9776w\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.945673 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/382fdc05-9257-41ca-a032-1dc84b1483e3-config\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.945765 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh" Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.945824 
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.945863 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52vdr\" (UniqueName: \"kubernetes.io/projected/382fdc05-9257-41ca-a032-1dc84b1483e3-kube-api-access-52vdr\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.945894 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.973244 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"]
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.974943 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.979407 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.979741 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.980019 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.980282 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.980580 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.980802 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway"
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.992044 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"]
Dec 10 14:54:19 crc kubenswrapper[4727]: I1210 14:54:19.993783 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.001273 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-vsbxh"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.007028 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"]
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.021339 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"]
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047444 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047519 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047544 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52vdr\" (UniqueName: \"kubernetes.io/projected/382fdc05-9257-41ca-a032-1dc84b1483e3-kube-api-access-52vdr\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047565 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047603 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047642 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047666 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047689 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047714 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047732 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grkds\" (UniqueName: \"kubernetes.io/projected/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-kube-api-access-grkds\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047748 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047771 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047792 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047819 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047843 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047860 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.047894 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.048040 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.048063 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jbks\" (UniqueName: \"kubernetes.io/projected/26378e88-9d49-423c-b1a9-91a70f08b760-kube-api-access-2jbks\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.048081 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.048103 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/382fdc05-9257-41ca-a032-1dc84b1483e3-config\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh" Dec 10 14:54:20 crc 
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.049017 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.052045 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/382fdc05-9257-41ca-a032-1dc84b1483e3-config\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.059713 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.060423 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.076546 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.077581 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52vdr\" (UniqueName: \"kubernetes.io/projected/382fdc05-9257-41ca-a032-1dc84b1483e3-kube-api-access-52vdr\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.109058 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150197 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150251 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150280 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150306 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150323 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grkds\" (UniqueName: \"kubernetes.io/projected/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-kube-api-access-grkds\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150345 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150368 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150388 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150432 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150456 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150475 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150517 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150546 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150580 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jbks\" (UniqueName: \"kubernetes.io/projected/26378e88-9d49-423c-b1a9-91a70f08b760-kube-api-access-2jbks\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150609 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150647 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150670 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.150720 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.156803 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:54:20 crc kubenswrapper[4727]: E1210 14:54:20.156990 4727 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret "cloudkitty-lokistack-gateway-http" not found Dec 10 14:54:20 crc kubenswrapper[4727]: E1210 14:54:20.157076 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-tls-secret podName:26378e88-9d49-423c-b1a9-91a70f08b760 nodeName:}" failed. No retries permitted until 2025-12-10 14:54:20.657024413 +0000 UTC m=+1364.851798955 (durationBeforeRetry 500ms). 
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.159091 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.160838 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.163434 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.163939 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.164495 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.166425 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.169632 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.170309 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"
\"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.170494 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.170598 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.170619 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.171102 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.172805 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.176953 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.185216 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jbks\" (UniqueName: \"kubernetes.io/projected/26378e88-9d49-423c-b1a9-91a70f08b760-kube-api-access-2jbks\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.191122 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grkds\" (UniqueName: \"kubernetes.io/projected/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-kube-api-access-grkds\") pod \"cloudkitty-lokistack-gateway-bc75944f-kvpwf\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " 
pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.313353 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.621978 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.623435 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.626788 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.626856 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.646076 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.659484 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.668186 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-ltztv\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.761708 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f429b94a-d632-41b7-85c3-584f6dfe4475-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.761767 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.761827 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.761858 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " 
pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.761950 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx7tb\" (UniqueName: \"kubernetes.io/projected/f429b94a-d632-41b7-85c3-584f6dfe4475-kube-api-access-sx7tb\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.762018 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.762052 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.762073 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.791442 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.792851 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.798755 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.804641 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.824022 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.955305 4727 util.go:30] "No sandbox for pod can be found. 
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.958220 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.958312 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.958363 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.958420 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.958466 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f429b94a-d632-41b7-85c3-584f6dfe4475-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.958559 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.959066 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.960513 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f429b94a-d632-41b7-85c3-584f6dfe4475-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.960619 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0"
\"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.960773 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.960821 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.960858 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.963398 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.971285 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc5506b0-8e70-49c6-8783-5802aca6f72e-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.971484 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkrcx\" (UniqueName: \"kubernetes.io/projected/cc5506b0-8e70-49c6-8783-5802aca6f72e-kube-api-access-dkrcx\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.971544 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.971598 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx7tb\" (UniqueName: \"kubernetes.io/projected/f429b94a-d632-41b7-85c3-584f6dfe4475-kube-api-access-sx7tb\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 
crc kubenswrapper[4727]: I1210 14:54:20.971736 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.972356 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.973683 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:20 crc kubenswrapper[4727]: I1210 14:54:20.993727 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.000065 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.002818 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.037991 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.039476 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.047880 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.048180 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.051433 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx7tb\" (UniqueName: \"kubernetes.io/projected/f429b94a-d632-41b7-85c3-584f6dfe4475-kube-api-access-sx7tb\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.077038 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.077103 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.077179 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.077231 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.077274 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc5506b0-8e70-49c6-8783-5802aca6f72e-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.077308 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkrcx\" (UniqueName: \"kubernetes.io/projected/cc5506b0-8e70-49c6-8783-5802aca6f72e-kube-api-access-dkrcx\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.077341 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: 
\"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.078224 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.079795 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc5506b0-8e70-49c6-8783-5802aca6f72e-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.083206 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.085617 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.087495 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.087610 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.091698 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.103489 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.115132 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkrcx\" (UniqueName: \"kubernetes.io/projected/cc5506b0-8e70-49c6-8783-5802aca6f72e-kube-api-access-dkrcx\") pod \"cloudkitty-lokistack-compactor-0\" (UID: 
\"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.132991 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.178806 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.178899 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.178954 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.178978 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.179007 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8cc4\" (UniqueName: \"kubernetes.io/projected/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-kube-api-access-d8cc4\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.179330 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.179517 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc 
kubenswrapper[4727]: I1210 14:54:21.281276 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.281355 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.281383 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.281481 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.281515 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8cc4\" (UniqueName: \"kubernetes.io/projected/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-kube-api-access-d8cc4\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.281593 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.281691 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.282218 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.282868 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.283000 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.286026 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.286363 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.286994 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.302173 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8cc4\" (UniqueName: \"kubernetes.io/projected/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-kube-api-access-d8cc4\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.310222 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.369552 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.414325 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:54:21 crc kubenswrapper[4727]: I1210 14:54:21.415415 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:54:26 crc kubenswrapper[4727]: E1210 14:54:26.171649 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 10 14:54:26 crc kubenswrapper[4727]: E1210 14:54:26.172424 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7rbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-wkmqx_openstack(5e99022a-2268-4c86-8192-9af3f5abf6e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:54:26 crc kubenswrapper[4727]: E1210 14:54:26.173600 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-wkmqx" podUID="5e99022a-2268-4c86-8192-9af3f5abf6e5" Dec 10 14:54:26 crc kubenswrapper[4727]: E1210 14:54:26.200559 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 10 14:54:26 crc kubenswrapper[4727]: E1210 14:54:26.200770 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vtk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-m4jlc_openstack(a229e6d8-d8d6-40bd-9788-8ba7c86602b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:54:26 crc kubenswrapper[4727]: E1210 14:54:26.201962 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" podUID="a229e6d8-d8d6-40bd-9788-8ba7c86602b4" Dec 10 14:54:27 crc kubenswrapper[4727]: E1210 14:54:27.748305 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 10 14:54:27 crc kubenswrapper[4727]: E1210 14:54:27.750800 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnp6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(8216a031-5caf-4b21-9613-c798dd35dfb7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:54:27 crc kubenswrapper[4727]: E1210 14:54:27.752019 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="8216a031-5caf-4b21-9613-c798dd35dfb7" Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.076423 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.077407 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wkmqx" Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.250574 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-config\") pod \"a229e6d8-d8d6-40bd-9788-8ba7c86602b4\" (UID: \"a229e6d8-d8d6-40bd-9788-8ba7c86602b4\") " Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.250756 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e99022a-2268-4c86-8192-9af3f5abf6e5-config\") pod \"5e99022a-2268-4c86-8192-9af3f5abf6e5\" (UID: \"5e99022a-2268-4c86-8192-9af3f5abf6e5\") " Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.250854 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7rbx\" (UniqueName: \"kubernetes.io/projected/5e99022a-2268-4c86-8192-9af3f5abf6e5-kube-api-access-p7rbx\") pod \"5e99022a-2268-4c86-8192-9af3f5abf6e5\" (UID: \"5e99022a-2268-4c86-8192-9af3f5abf6e5\") " Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.250896 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vtk9\" (UniqueName: \"kubernetes.io/projected/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-kube-api-access-7vtk9\") pod \"a229e6d8-d8d6-40bd-9788-8ba7c86602b4\" (UID: \"a229e6d8-d8d6-40bd-9788-8ba7c86602b4\") " Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.250950 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-dns-svc\") pod \"a229e6d8-d8d6-40bd-9788-8ba7c86602b4\" (UID: \"a229e6d8-d8d6-40bd-9788-8ba7c86602b4\") " Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.252275 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a229e6d8-d8d6-40bd-9788-8ba7c86602b4" (UID: "a229e6d8-d8d6-40bd-9788-8ba7c86602b4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.252775 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-config" (OuterVolumeSpecName: "config") pod "a229e6d8-d8d6-40bd-9788-8ba7c86602b4" (UID: "a229e6d8-d8d6-40bd-9788-8ba7c86602b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.253401 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e99022a-2268-4c86-8192-9af3f5abf6e5-config" (OuterVolumeSpecName: "config") pod "5e99022a-2268-4c86-8192-9af3f5abf6e5" (UID: "5e99022a-2268-4c86-8192-9af3f5abf6e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.258530 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e99022a-2268-4c86-8192-9af3f5abf6e5-kube-api-access-p7rbx" (OuterVolumeSpecName: "kube-api-access-p7rbx") pod "5e99022a-2268-4c86-8192-9af3f5abf6e5" (UID: "5e99022a-2268-4c86-8192-9af3f5abf6e5"). InnerVolumeSpecName "kube-api-access-p7rbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.264350 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-kube-api-access-7vtk9" (OuterVolumeSpecName: "kube-api-access-7vtk9") pod "a229e6d8-d8d6-40bd-9788-8ba7c86602b4" (UID: "a229e6d8-d8d6-40bd-9788-8ba7c86602b4"). InnerVolumeSpecName "kube-api-access-7vtk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.353592 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vtk9\" (UniqueName: \"kubernetes.io/projected/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-kube-api-access-7vtk9\") on node \"crc\" DevicePath \"\"" Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.353658 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.353671 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a229e6d8-d8d6-40bd-9788-8ba7c86602b4-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.353681 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e99022a-2268-4c86-8192-9af3f5abf6e5-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.353692 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7rbx\" (UniqueName: \"kubernetes.io/projected/5e99022a-2268-4c86-8192-9af3f5abf6e5-kube-api-access-p7rbx\") on node \"crc\" DevicePath \"\"" Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.593892 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x2fq6"] Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.691777 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" event={"ID":"a229e6d8-d8d6-40bd-9788-8ba7c86602b4","Type":"ContainerDied","Data":"269460f6b21e7e224a87508daeb98d1ee1a7c38fb94be77d7dda2cc098223705"} Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.691875 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-m4jlc" Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.695749 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wkmqx" Dec 10 14:54:28 crc kubenswrapper[4727]: E1210 14:54:28.700255 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="8216a031-5caf-4b21-9613-c798dd35dfb7" Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.701031 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-wkmqx" event={"ID":"5e99022a-2268-4c86-8192-9af3f5abf6e5","Type":"ContainerDied","Data":"958178e72e12e42cdf096a0dbeadc3064e83f5b23ff914c1dc544a84ab69c21e"} Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.810393 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m4jlc"] Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.829306 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m4jlc"] Dec 10 14:54:28 crc kubenswrapper[4727]: I1210 14:54:28.984594 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wkmqx"] Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.004564 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wkmqx"] Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.029210 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.194009 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.213215 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 10 14:54:29 crc kubenswrapper[4727]: W1210 14:54:29.240112 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0273aa7_a359_4d67_9c86_7920c5d69e11.slice/crio-ca12605d31cc50587dba00dc9d8496e6446bb0ae733dcf2d18357492b622deb7 WatchSource:0}: Error finding container ca12605d31cc50587dba00dc9d8496e6446bb0ae733dcf2d18357492b622deb7: Status 404 returned error can't find the container with id ca12605d31cc50587dba00dc9d8496e6446bb0ae733dcf2d18357492b622deb7 Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.291615 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.597575 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.605439 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"] Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.620674 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 10 14:54:29 crc kubenswrapper[4727]: W1210 14:54:29.627456 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1e2f39a_c206_4375_bea1_db945f0b3003.slice/crio-3946dca35c4f333bcb84b7a26c5ab57629db41bc4a2121c4e515a21cc2313ae9 WatchSource:0}: Error finding container 3946dca35c4f333bcb84b7a26c5ab57629db41bc4a2121c4e515a21cc2313ae9: Status 404 returned error can't find the 
container with id 3946dca35c4f333bcb84b7a26c5ab57629db41bc4a2121c4e515a21cc2313ae9 Dec 10 14:54:29 crc kubenswrapper[4727]: W1210 14:54:29.628577 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28ce859f_f595_4f9a_ad5d_1131acd951c7.slice/crio-f5be6253d1674772a5c7278e5b4443d1a18afefcc8a9b33c205c30ab8a752acf WatchSource:0}: Error finding container f5be6253d1674772a5c7278e5b4443d1a18afefcc8a9b33c205c30ab8a752acf: Status 404 returned error can't find the container with id f5be6253d1674772a5c7278e5b4443d1a18afefcc8a9b33c205c30ab8a752acf Dec 10 14:54:29 crc kubenswrapper[4727]: W1210 14:54:29.637946 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98542d0d_88a6_49f6_a86e_36a5ba2d7b18.slice/crio-9de831fd4dea4cc60bf2c740b29026f20c8a3c40dcfa6973be9c5d46a3330dc3 WatchSource:0}: Error finding container 9de831fd4dea4cc60bf2c740b29026f20c8a3c40dcfa6973be9c5d46a3330dc3: Status 404 returned error can't find the container with id 9de831fd4dea4cc60bf2c740b29026f20c8a3c40dcfa6973be9c5d46a3330dc3 Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.644184 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.653645 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"] Dec 10 14:54:29 crc kubenswrapper[4727]: W1210 14:54:29.740455 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b92e7c2_a91e_4b8d_9316_ea7fc4e90188.slice/crio-f7a7585fe233ceccad62dc8af07662ae9ee1d150513648bdf6bb87cc3ef5c5a9 WatchSource:0}: Error finding container f7a7585fe233ceccad62dc8af07662ae9ee1d150513648bdf6bb87cc3ef5c5a9: Status 404 returned error can't find the container with id f7a7585fe233ceccad62dc8af07662ae9ee1d150513648bdf6bb87cc3ef5c5a9 Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.741976 4727 generic.go:334] "Generic (PLEG): container finished" podID="0d6fded7-3029-48a5-96b8-6f8296acd34c" containerID="37ce970a3c990b7babd4d797d66378f20c0d3a91757876ec724bd2a734e24df2" exitCode=0 Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.742101 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" event={"ID":"0d6fded7-3029-48a5-96b8-6f8296acd34c","Type":"ContainerDied","Data":"37ce970a3c990b7babd4d797d66378f20c0d3a91757876ec724bd2a734e24df2"} Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.748247 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d","Type":"ContainerStarted","Data":"2980d71c9993d0d78dcebf5a400616640a7dc840609dad61b7bcab33576bf477"} Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.752786 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3a122a79-dced-4afa-bb4c-4c6cd806770a","Type":"ContainerStarted","Data":"760314d8ced6f82b1097f7b9ce63f2f9342f9a7e4c2da53d71df59c7108a16bd"} Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.757956 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"00ee5804-d85a-432e-9295-b018259dcf38","Type":"ContainerStarted","Data":"9782fbb241b30d423b458d0e99f04f4d582178b425084f8b945e4cd212d655a9"} Dec 10 14:54:29 
crc kubenswrapper[4727]: I1210 14:54:29.774649 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.776837 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6e710c7e-8c31-487b-ade5-a403f619e489","Type":"ContainerStarted","Data":"f55cca8b1d746a15aed9564a0c3abd0ede65ea46755f1e0836e5a6b5a5576479"} Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.782521 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"28ce859f-f595-4f9a-ad5d-1131acd951c7","Type":"ContainerStarted","Data":"f5be6253d1674772a5c7278e5b4443d1a18afefcc8a9b33c205c30ab8a752acf"} Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.799191 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e1e2f39a-c206-4375-bea1-db945f0b3003","Type":"ContainerStarted","Data":"3946dca35c4f333bcb84b7a26c5ab57629db41bc4a2121c4e515a21cc2313ae9"} Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.803207 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f0273aa7-a359-4d67-9c86-7920c5d69e11","Type":"ContainerStarted","Data":"ca12605d31cc50587dba00dc9d8496e6446bb0ae733dcf2d18357492b622deb7"} Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.811897 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" event={"ID":"26378e88-9d49-423c-b1a9-91a70f08b760","Type":"ContainerStarted","Data":"2d949001bd031eb276affe2877221817fc8ed3a38dfcd7146e7dd30ff47a5f12"} Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.814562 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" event={"ID":"98542d0d-88a6-49f6-a86e-36a5ba2d7b18","Type":"ContainerStarted","Data":"9de831fd4dea4cc60bf2c740b29026f20c8a3c40dcfa6973be9c5d46a3330dc3"} Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.817880 4727 generic.go:334] "Generic (PLEG): container finished" podID="6cc8350a-ea00-40a2-915f-3337cd27c244" containerID="3e0e9f17be973f8a24944d09e3127d5751d7e84ca0b391a68db1a16b4cf62656" exitCode=0 Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.818014 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" event={"ID":"6cc8350a-ea00-40a2-915f-3337cd27c244","Type":"ContainerDied","Data":"3e0e9f17be973f8a24944d09e3127d5751d7e84ca0b391a68db1a16b4cf62656"} Dec 10 14:54:29 crc kubenswrapper[4727]: I1210 14:54:29.821682 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x2fq6" event={"ID":"69a1889a-3ba4-463e-bd9a-4f417ca69280","Type":"ContainerStarted","Data":"b917850cd9e3b8ca13ab9fa9d66e4f95d5370de25c3e454a826eb9a3062802ca"} Dec 10 14:54:30 crc kubenswrapper[4727]: E1210 14:54:30.085125 4727 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Dec 10 14:54:30 crc kubenswrapper[4727]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/6cc8350a-ea00-40a2-915f-3337cd27c244/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 10 14:54:30 crc kubenswrapper[4727]: > podSandboxID="865155810d7188c8e3ffcb113e595a3850926f1ad6555bd27c31c0dc7f0ae553" Dec 10 14:54:30 crc kubenswrapper[4727]: E1210 14:54:30.085704 4727 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 10 14:54:30 
crc kubenswrapper[4727]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lkxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-fctzl_openstack(6cc8350a-ea00-40a2-915f-3337cd27c244): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/6cc8350a-ea00-40a2-915f-3337cd27c244/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 10 14:54:30 crc kubenswrapper[4727]: > logger="UnhandledError" Dec 10 14:54:30 crc kubenswrapper[4727]: E1210 14:54:30.087277 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/6cc8350a-ea00-40a2-915f-3337cd27c244/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" podUID="6cc8350a-ea00-40a2-915f-3337cd27c244" Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.232299 4727 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"] Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.240973 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.259494 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-hs594"] Dec 10 14:54:30 crc kubenswrapper[4727]: W1210 14:54:30.279706 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod382fdc05_9257_41ca_a032_1dc84b1483e3.slice/crio-8213a2822c6d973b3cefa0b0a672b5be79289d53187435c3f776381f061b7dd5 WatchSource:0}: Error finding container 8213a2822c6d973b3cefa0b0a672b5be79289d53187435c3f776381f061b7dd5: Status 404 returned error can't find the container with id 8213a2822c6d973b3cefa0b0a672b5be79289d53187435c3f776381f061b7dd5 Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.298943 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.310617 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-9776w"] Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.318610 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.330172 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 14:54:30 crc kubenswrapper[4727]: W1210 14:54:30.348648 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc5506b0_8e70_49c6_8783_5802aca6f72e.slice/crio-2050ae5f93f59179a7cdec5a626eb7c7e19585a2c2cf3437e03ff56454caa240 WatchSource:0}: Error finding container 2050ae5f93f59179a7cdec5a626eb7c7e19585a2c2cf3437e03ff56454caa240: Status 404 returned error can't find the container with id 2050ae5f93f59179a7cdec5a626eb7c7e19585a2c2cf3437e03ff56454caa240 Dec 10 14:54:30 crc kubenswrapper[4727]: W1210 14:54:30.355776 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14b5391a_f4f5_4bda_b1d6_0e7edd1b2ec0.slice/crio-8fcf7b0e64d9d676322bc24cfdf7f4918a4afd36de5132238b88a25a2ffc961e WatchSource:0}: Error finding container 8fcf7b0e64d9d676322bc24cfdf7f4918a4afd36de5132238b88a25a2ffc961e: Status 404 returned error can't find the container with id 8fcf7b0e64d9d676322bc24cfdf7f4918a4afd36de5132238b88a25a2ffc961e Dec 10 14:54:30 crc kubenswrapper[4727]: W1210 14:54:30.359064 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2375881f_bb87_45f5_83a1_a314f445945d.slice/crio-14b73f1b4dc02c25006b416dcdeec57aa57e0a26624e26a75fc48fe77ea6bdc1 WatchSource:0}: Error finding container 14b73f1b4dc02c25006b416dcdeec57aa57e0a26624e26a75fc48fe77ea6bdc1: Status 404 returned error can't find the container with id 14b73f1b4dc02c25006b416dcdeec57aa57e0a26624e26a75fc48fe77ea6bdc1 Dec 10 14:54:30 crc kubenswrapper[4727]: E1210 14:54:30.362068 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:loki-querier,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7,Command:[],Args:[-target=querier -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpsx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
cloudkitty-lokistack-querier-5467947bf7-9776w_openstack(2375881f-bb87-45f5-83a1-a314f445945d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:54:30 crc kubenswrapper[4727]: E1210 14:54:30.363234 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" podUID="2375881f-bb87-45f5-83a1-a314f445945d" Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.469723 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4tq5b"] Dec 10 14:54:30 crc kubenswrapper[4727]: W1210 14:54:30.485543 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb42bc94_d07f_4121_8591_8f868d089a2a.slice/crio-e2d5af0f87c8d0715cc53064e13880f3f31edfd0b930517321e7f16ac280bd36 WatchSource:0}: Error finding container e2d5af0f87c8d0715cc53064e13880f3f31edfd0b930517321e7f16ac280bd36: Status 404 returned error can't find the container with id e2d5af0f87c8d0715cc53064e13880f3f31edfd0b930517321e7f16ac280bd36 Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.579965 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e99022a-2268-4c86-8192-9af3f5abf6e5" path="/var/lib/kubelet/pods/5e99022a-2268-4c86-8192-9af3f5abf6e5/volumes" Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.580947 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a229e6d8-d8d6-40bd-9788-8ba7c86602b4" path="/var/lib/kubelet/pods/a229e6d8-d8d6-40bd-9788-8ba7c86602b4/volumes" Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.846232 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"f429b94a-d632-41b7-85c3-584f6dfe4475","Type":"ContainerStarted","Data":"78c5bd88a87496c0ebd76d36ae8ca963447c07f6168d78c7e5183a605b4c35af"} Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.850301 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" event={"ID":"0d6fded7-3029-48a5-96b8-6f8296acd34c","Type":"ContainerStarted","Data":"236a24fb90407a7c69205feab67cf66a8b04820d59b138b7de60667c2b855c23"} Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.850634 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.851963 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4tq5b" event={"ID":"fb42bc94-d07f-4121-8591-8f868d089a2a","Type":"ContainerStarted","Data":"e2d5af0f87c8d0715cc53064e13880f3f31edfd0b930517321e7f16ac280bd36"} Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.854507 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" event={"ID":"b9128ddd-7fb4-41b8-a218-2201b9ef57ad","Type":"ContainerStarted","Data":"690c140dde3e1ede7a72b4f56c41d16520655fc826e19f49bad987160c8ae030"} Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.857959 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0","Type":"ContainerStarted","Data":"8fcf7b0e64d9d676322bc24cfdf7f4918a4afd36de5132238b88a25a2ffc961e"} Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.860120 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"cc5506b0-8e70-49c6-8783-5802aca6f72e","Type":"ContainerStarted","Data":"2050ae5f93f59179a7cdec5a626eb7c7e19585a2c2cf3437e03ff56454caa240"} Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.861797 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5d7bed75-52e4-4b97-830b-f2b55f222732","Type":"ContainerStarted","Data":"edf1a40d1d26378d7a48bfbf46741645d8fc94474c27c8ba827542f545d027cc"} Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.864110 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh" event={"ID":"382fdc05-9257-41ca-a032-1dc84b1483e3","Type":"ContainerStarted","Data":"8213a2822c6d973b3cefa0b0a672b5be79289d53187435c3f776381f061b7dd5"} Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.865372 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" event={"ID":"2375881f-bb87-45f5-83a1-a314f445945d","Type":"ContainerStarted","Data":"14b73f1b4dc02c25006b416dcdeec57aa57e0a26624e26a75fc48fe77ea6bdc1"} Dec 10 14:54:30 crc kubenswrapper[4727]: E1210 14:54:30.867205 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" podUID="2375881f-bb87-45f5-83a1-a314f445945d" Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.874667 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188","Type":"ContainerStarted","Data":"f7a7585fe233ceccad62dc8af07662ae9ee1d150513648bdf6bb87cc3ef5c5a9"} Dec 10 14:54:30 crc kubenswrapper[4727]: I1210 14:54:30.895447 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" podStartSLOduration=4.364074523 podStartE2EDuration="28.895417559s" podCreationTimestamp="2025-12-10 14:54:02 +0000 UTC" firstStartedPulling="2025-12-10 14:54:03.442404886 +0000 UTC m=+1347.637179418" lastFinishedPulling="2025-12-10 14:54:27.973747912 +0000 UTC m=+1372.168522454" observedRunningTime="2025-12-10 14:54:30.874632494 +0000 UTC m=+1375.069407036" watchObservedRunningTime="2025-12-10 14:54:30.895417559 +0000 UTC m=+1375.090192101" Dec 10 14:54:31 crc kubenswrapper[4727]: I1210 14:54:31.886670 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" event={"ID":"6cc8350a-ea00-40a2-915f-3337cd27c244","Type":"ContainerStarted","Data":"4a916cf44e74e4aee03ab4c1560c17868a9a8b4a99103e2867b0d4c3fd87bd14"} Dec 10 14:54:31 crc kubenswrapper[4727]: I1210 14:54:31.886960 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:54:31 crc kubenswrapper[4727]: I1210 14:54:31.890065 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"28ce859f-f595-4f9a-ad5d-1131acd951c7","Type":"ContainerStarted","Data":"8541719ee2bedad7a365e3c9dcbcc2e7fccae5a8505315d8a761a4f0bd1773e3"} Dec 10 14:54:31 crc kubenswrapper[4727]: E1210 14:54:31.890722 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"loki-querier\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" podUID="2375881f-bb87-45f5-83a1-a314f445945d" Dec 10 14:54:31 crc kubenswrapper[4727]: I1210 14:54:31.914180 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" podStartSLOduration=5.521008227 podStartE2EDuration="30.914159991s" podCreationTimestamp="2025-12-10 14:54:01 +0000 UTC" firstStartedPulling="2025-12-10 14:54:02.754001148 +0000 UTC m=+1346.948775690" lastFinishedPulling="2025-12-10 14:54:28.147152912 +0000 UTC m=+1372.341927454" observedRunningTime="2025-12-10 14:54:31.907350279 +0000 UTC m=+1376.102124841" watchObservedRunningTime="2025-12-10 14:54:31.914159991 +0000 UTC m=+1376.108934533" Dec 10 14:54:37 crc kubenswrapper[4727]: I1210 14:54:37.134169 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:54:37 crc kubenswrapper[4727]: I1210 14:54:37.576396 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" Dec 10 14:54:37 crc kubenswrapper[4727]: I1210 14:54:37.642419 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fctzl"] Dec 10 14:54:38 crc kubenswrapper[4727]: I1210 14:54:38.059513 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" podUID="6cc8350a-ea00-40a2-915f-3337cd27c244" containerName="dnsmasq-dns" containerID="cri-o://4a916cf44e74e4aee03ab4c1560c17868a9a8b4a99103e2867b0d4c3fd87bd14" gracePeriod=10 Dec 10 14:54:39 crc kubenswrapper[4727]: I1210 14:54:39.074727 4727 generic.go:334] "Generic (PLEG): container finished" podID="6cc8350a-ea00-40a2-915f-3337cd27c244" containerID="4a916cf44e74e4aee03ab4c1560c17868a9a8b4a99103e2867b0d4c3fd87bd14" exitCode=0 Dec 10 14:54:39 crc kubenswrapper[4727]: I1210 14:54:39.074779 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" event={"ID":"6cc8350a-ea00-40a2-915f-3337cd27c244","Type":"ContainerDied","Data":"4a916cf44e74e4aee03ab4c1560c17868a9a8b4a99103e2867b0d4c3fd87bd14"} Dec 10 14:54:42 crc kubenswrapper[4727]: I1210 14:54:42.133531 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" podUID="6cc8350a-ea00-40a2-915f-3337cd27c244" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.102:5353: connect: connection refused" Dec 10 14:54:47 crc kubenswrapper[4727]: I1210 14:54:47.133929 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" podUID="6cc8350a-ea00-40a2-915f-3337cd27c244" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.102:5353: connect: connection refused" Dec 10 14:54:48 crc kubenswrapper[4727]: E1210 14:54:48.985063 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e" Dec 10 14:54:48 crc kubenswrapper[4727]: E1210 14:54:48.985688 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:gateway,Image:registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e,Command:[],Args:[--debug.name=lokistack-gateway --web.listen=0.0.0.0:8080 --web.internal.listen=0.0.0.0:8081 --web.healthchecks.url=https://localhost:8080 --log.level=warn --logs.read.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.tail.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.write.endpoint=https://cloudkitty-lokistack-distributor-http.openstack.svc.cluster.local:3100 --logs.write-timeout=4m0s --rbac.config=/etc/lokistack-gateway/rbac.yaml --tenants.config=/etc/lokistack-gateway/tenants.yaml --server.read-timeout=48s --server.write-timeout=6m0s --tls.min-version=VersionTLS12 --tls.server.cert-file=/var/run/tls/http/server/tls.crt --tls.server.key-file=/var/run/tls/http/server/tls.key --tls.healthchecks.server-ca-file=/var/run/ca/server/service-ca.crt --tls.healthchecks.server-name=cloudkitty-lokistack-gateway-http.openstack.svc.cluster.local --tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt --tls.internal.server.key-file=/var/run/tls/http/server/tls.key --tls.min-version=VersionTLS12 --tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logs.tls.ca-file=/var/run/ca/upstream/service-ca.crt --logs.tls.cert-file=/var/run/tls/http/upstream/tls.crt --logs.tls.key-file=/var/run/tls/http/upstream/tls.key 
--tls.client-auth-type=RequestClientCert],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},ContainerPort{Name:public,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rbac,ReadOnly:true,MountPath:/etc/lokistack-gateway/rbac.yaml,SubPath:rbac.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tenants,ReadOnly:true,MountPath:/etc/lokistack-gateway/tenants.yaml,SubPath:tenants.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lokistack-gateway,ReadOnly:true,MountPath:/etc/lokistack-gateway/lokistack-gateway.rego,SubPath:lokistack-gateway.rego,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-secret,ReadOnly:true,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-client-http,ReadOnly:true,MountPath:/var/run/tls/http/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-ca-bundle,ReadOnly:false,MountPath:/var/run/tenants-ca/cloudkitty,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jbks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/live,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-gateway-bc75944f-ltztv_openstack(26378e88-9d49-423c-b1a9-91a70f08b760): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:54:48 crc kubenswrapper[4727]: E1210 14:54:48.986999 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" podUID="26378e88-9d49-423c-b1a9-91a70f08b760" Dec 10 14:54:48 crc kubenswrapper[4727]: E1210 14:54:48.997998 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e" Dec 10 14:54:48 crc kubenswrapper[4727]: E1210 14:54:48.998147 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:gateway,Image:registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e,Command:[],Args:[--debug.name=lokistack-gateway --web.listen=0.0.0.0:8080 --web.internal.listen=0.0.0.0:8081 --web.healthchecks.url=https://localhost:8080 --log.level=warn --logs.read.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.tail.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.write.endpoint=https://cloudkitty-lokistack-distributor-http.openstack.svc.cluster.local:3100 --logs.write-timeout=4m0s --rbac.config=/etc/lokistack-gateway/rbac.yaml --tenants.config=/etc/lokistack-gateway/tenants.yaml --server.read-timeout=48s --server.write-timeout=6m0s --tls.min-version=VersionTLS12 --tls.server.cert-file=/var/run/tls/http/server/tls.crt --tls.server.key-file=/var/run/tls/http/server/tls.key --tls.healthchecks.server-ca-file=/var/run/ca/server/service-ca.crt --tls.healthchecks.server-name=cloudkitty-lokistack-gateway-http.openstack.svc.cluster.local --tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt --tls.internal.server.key-file=/var/run/tls/http/server/tls.key --tls.min-version=VersionTLS12 --tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logs.tls.ca-file=/var/run/ca/upstream/service-ca.crt --logs.tls.cert-file=/var/run/tls/http/upstream/tls.crt --logs.tls.key-file=/var/run/tls/http/upstream/tls.key 
--tls.client-auth-type=RequestClientCert],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},ContainerPort{Name:public,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rbac,ReadOnly:true,MountPath:/etc/lokistack-gateway/rbac.yaml,SubPath:rbac.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tenants,ReadOnly:true,MountPath:/etc/lokistack-gateway/tenants.yaml,SubPath:tenants.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lokistack-gateway,ReadOnly:true,MountPath:/etc/lokistack-gateway/lokistack-gateway.rego,SubPath:lokistack-gateway.rego,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-secret,ReadOnly:true,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-client-http,ReadOnly:true,MountPath:/var/run/tls/http/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-ca-bundle,ReadOnly:false,MountPath:/var/run/tenants-ca/cloudkitty,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-grkds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/live,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-gateway-bc75944f-kvpwf_openstack(98542d0d-88a6-49f6-a86e-36a5ba2d7b18): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:54:48 crc kubenswrapper[4727]: E1210 14:54:48.999442 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" podUID="98542d0d-88a6-49f6-a86e-36a5ba2d7b18" Dec 10 14:54:49 crc kubenswrapper[4727]: E1210 14:54:49.161693 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" podUID="98542d0d-88a6-49f6-a86e-36a5ba2d7b18" Dec 10 14:54:49 crc kubenswrapper[4727]: E1210 14:54:49.163511 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" podUID="26378e88-9d49-423c-b1a9-91a70f08b760" Dec 10 14:54:51 crc kubenswrapper[4727]: E1210 14:54:51.228026 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7" Dec 10 14:54:51 crc kubenswrapper[4727]: E1210 14:54:51.228325 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-index-gateway,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7,Command:[],Args:[-target=index-gateway -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8cc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-index-gateway-0_openstack(14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:54:51 
crc kubenswrapper[4727]: E1210 14:54:51.229732 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" Dec 10 14:54:52 crc kubenswrapper[4727]: I1210 14:54:52.133567 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" podUID="6cc8350a-ea00-40a2-915f-3337cd27c244" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.102:5353: connect: connection refused" Dec 10 14:54:52 crc kubenswrapper[4727]: I1210 14:54:52.134193 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:54:52 crc kubenswrapper[4727]: E1210 14:54:52.231169 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" Dec 10 14:54:56 crc kubenswrapper[4727]: E1210 14:54:56.270656 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7" Dec 10 14:54:56 crc kubenswrapper[4727]: E1210 14:54:56.271202 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-query-frontend,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7,Command:[],Args:[-target=query-frontend -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-query-frontend-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-query-frontend-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52vdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh_openstack(382fdc05-9257-41ca-a032-1dc84b1483e3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:54:56 crc kubenswrapper[4727]: E1210 14:54:56.272398 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-query-frontend\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh" podUID="382fdc05-9257-41ca-a032-1dc84b1483e3" Dec 10 14:54:57 crc kubenswrapper[4727]: E1210 14:54:57.053352 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7" Dec 10 14:54:57 crc kubenswrapper[4727]: E1210 14:54:57.053630 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-ingester,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7,Command:[],Args:[-target=ingester -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:wal,ReadOnly:false,MountPath:/tmp/wal,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sx7tb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-ingester-0_openstack(f429b94a-d632-41b7-85c3-584f6dfe4475): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:54:57 crc kubenswrapper[4727]: E1210 14:54:57.055209 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="f429b94a-d632-41b7-85c3-584f6dfe4475" Dec 10 14:54:57 crc kubenswrapper[4727]: E1210 14:54:57.285891 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-query-frontend\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh" podUID="382fdc05-9257-41ca-a032-1dc84b1483e3" Dec 10 14:54:57 crc kubenswrapper[4727]: E1210 14:54:57.285899 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="f429b94a-d632-41b7-85c3-584f6dfe4475" Dec 10 14:54:57 crc kubenswrapper[4727]: E1210 14:54:57.445855 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7" Dec 10 14:54:57 crc kubenswrapper[4727]: E1210 14:54:57.446172 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-compactor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7,Command:[],Args:[-target=compactor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-compactor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-compactor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkrcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-compactor-0_openstack(cc5506b0-8e70-49c6-8783-5802aca6f72e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:54:57 crc 
kubenswrapper[4727]: E1210 14:54:57.447583 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="cc5506b0-8e70-49c6-8783-5802aca6f72e" Dec 10 14:54:58 crc kubenswrapper[4727]: E1210 14:54:58.134492 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Dec 10 14:54:58 crc kubenswrapper[4727]: E1210 14:54:58.134733 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n545h588h596h647h64fh556h5f5h64bh55fhf4hb4h569hdfh64ch5d7hfch658h669h5c5h9bh57fhb5h9bh679h64dh5fdh57bh5c6h5fbh5b8h8fh566q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqvrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(6e710c7e-8c31-487b-ade5-a403f619e489): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:54:58 crc kubenswrapper[4727]: E1210 14:54:58.136001 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="6e710c7e-8c31-487b-ade5-a403f619e489" Dec 10 14:54:58 crc kubenswrapper[4727]: E1210 14:54:58.192044 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7" Dec 10 14:54:58 crc kubenswrapper[4727]: E1210 14:54:58.192582 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-distributor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7,Command:[],Args:[-target=distributor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95qb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-distributor-664b687b54-hs594_openstack(b9128ddd-7fb4-41b8-a218-2201b9ef57ad): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:54:58 crc kubenswrapper[4727]: E1210 14:54:58.193772 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" podUID="b9128ddd-7fb4-41b8-a218-2201b9ef57ad" Dec 10 14:54:58 crc kubenswrapper[4727]: E1210 14:54:58.287029 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" podUID="b9128ddd-7fb4-41b8-a218-2201b9ef57ad" Dec 10 14:54:58 crc kubenswrapper[4727]: E1210 14:54:58.291395 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="6e710c7e-8c31-487b-ade5-a403f619e489" Dec 10 14:54:58 crc kubenswrapper[4727]: E1210 14:54:58.296561 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="cc5506b0-8e70-49c6-8783-5802aca6f72e" Dec 10 14:54:58 crc kubenswrapper[4727]: E1210 14:54:58.721076 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" Dec 10 14:54:58 crc kubenswrapper[4727]: E1210 14:54:58.721464 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n685h64dh669h64dh544h67bh5d4h55ch557h645h5bh68h579h674h5d4hc8h5b9h676h59dh645h68bhcch5bh695h5cdh655h9fh697h677h559h66dh5cbq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbttw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-sb-0_openstack(7b92e7c2-a91e-4b8d-9316-ea7fc4e90188): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:54:59 crc kubenswrapper[4727]: E1210 14:54:59.421182 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Dec 10 14:54:59 crc kubenswrapper[4727]: E1210 14:54:59.421466 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch56ch9dhc5h684h654h66fh544h598h9h58ch5ffh7bh65h554h54dh58ch58bh6bh648h58fh575h5bfh588h659h594h8ch677h96h5b5h5f8h555q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4njq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl 
stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-x2fq6_openstack(69a1889a-3ba4-463e-bd9a-4f417ca69280): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:54:59 crc kubenswrapper[4727]: E1210 14:54:59.423511 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-x2fq6" podUID="69a1889a-3ba4-463e-bd9a-4f417ca69280" Dec 10 14:54:59 crc kubenswrapper[4727]: I1210 14:54:59.525320 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:54:59 crc kubenswrapper[4727]: I1210 14:54:59.618632 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc8350a-ea00-40a2-915f-3337cd27c244-config\") pod \"6cc8350a-ea00-40a2-915f-3337cd27c244\" (UID: \"6cc8350a-ea00-40a2-915f-3337cd27c244\") " Dec 10 14:54:59 crc kubenswrapper[4727]: I1210 14:54:59.618693 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc8350a-ea00-40a2-915f-3337cd27c244-dns-svc\") pod \"6cc8350a-ea00-40a2-915f-3337cd27c244\" (UID: \"6cc8350a-ea00-40a2-915f-3337cd27c244\") " Dec 10 14:54:59 crc kubenswrapper[4727]: I1210 14:54:59.618822 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lkxj\" (UniqueName: \"kubernetes.io/projected/6cc8350a-ea00-40a2-915f-3337cd27c244-kube-api-access-9lkxj\") pod \"6cc8350a-ea00-40a2-915f-3337cd27c244\" (UID: \"6cc8350a-ea00-40a2-915f-3337cd27c244\") " Dec 10 14:54:59 crc kubenswrapper[4727]: I1210 14:54:59.623870 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc8350a-ea00-40a2-915f-3337cd27c244-kube-api-access-9lkxj" (OuterVolumeSpecName: "kube-api-access-9lkxj") pod "6cc8350a-ea00-40a2-915f-3337cd27c244" (UID: "6cc8350a-ea00-40a2-915f-3337cd27c244"). InnerVolumeSpecName "kube-api-access-9lkxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:54:59 crc kubenswrapper[4727]: I1210 14:54:59.664326 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc8350a-ea00-40a2-915f-3337cd27c244-config" (OuterVolumeSpecName: "config") pod "6cc8350a-ea00-40a2-915f-3337cd27c244" (UID: "6cc8350a-ea00-40a2-915f-3337cd27c244"). InnerVolumeSpecName "config". 
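Aside (illustrative, not journal output): the container &Container{...} blobs in the kuberuntime_manager.go "Unhandled Error" entries above are Go-printed k8s.io/api/core/v1.Container values. A minimal sketch, assuming the k8s.io/api and k8s.io/apimachinery modules, of the ReadinessProbe printed for the loki-distributor container above (HTTPGet /ready on port 3101 over HTTPS; the "{0 3101 }" is an intstr.IntOrString whose Type 0 means an integer port):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Mirrors the ReadinessProbe logged above: HTTPGet /ready on port 3101,
	// Scheme HTTPS, 15s initial delay, 10s period, failure threshold 3.
	readiness := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/ready",
				Port:   intstr.FromInt(3101), // printed above as {0 3101 }
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		InitialDelaySeconds: 15,
		TimeoutSeconds:      1,
		PeriodSeconds:       10,
		SuccessThreshold:    1,
		FailureThreshold:    3,
	}
	// The generated String() method is what produces the single-line
	// &Probe{ProbeHandler:ProbeHandler{...}} form seen in these entries.
	fmt.Println(readiness.String())
}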
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:54:59 crc kubenswrapper[4727]: I1210 14:54:59.670312 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc8350a-ea00-40a2-915f-3337cd27c244-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6cc8350a-ea00-40a2-915f-3337cd27c244" (UID: "6cc8350a-ea00-40a2-915f-3337cd27c244"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:54:59 crc kubenswrapper[4727]: I1210 14:54:59.720790 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc8350a-ea00-40a2-915f-3337cd27c244-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:54:59 crc kubenswrapper[4727]: I1210 14:54:59.720831 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc8350a-ea00-40a2-915f-3337cd27c244-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:54:59 crc kubenswrapper[4727]: I1210 14:54:59.720844 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lkxj\" (UniqueName: \"kubernetes.io/projected/6cc8350a-ea00-40a2-915f-3337cd27c244-kube-api-access-9lkxj\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:00 crc kubenswrapper[4727]: E1210 14:55:00.013118 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Dec 10 14:55:00 crc kubenswrapper[4727]: E1210 14:55:00.013725 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n674h86h57fh5cdh5b4h65chdbhc7hd6hc8h5ffh64dh58ch694h57dh59dh689h7fh65ch67bh686h8bh685h554h7bhc5h577hbbh565h558h579h64cq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xq5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,Mou
ntPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(3a122a79-dced-4afa-bb4c-4c6cd806770a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:55:00 crc kubenswrapper[4727]: I1210 14:55:00.330086 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" Dec 10 14:55:00 crc kubenswrapper[4727]: I1210 14:55:00.331035 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" event={"ID":"6cc8350a-ea00-40a2-915f-3337cd27c244","Type":"ContainerDied","Data":"865155810d7188c8e3ffcb113e595a3850926f1ad6555bd27c31c0dc7f0ae553"} Dec 10 14:55:00 crc kubenswrapper[4727]: I1210 14:55:00.331153 4727 scope.go:117] "RemoveContainer" containerID="4a916cf44e74e4aee03ab4c1560c17868a9a8b4a99103e2867b0d4c3fd87bd14" Dec 10 14:55:00 crc kubenswrapper[4727]: E1210 14:55:00.332403 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-x2fq6" podUID="69a1889a-3ba4-463e-bd9a-4f417ca69280" Dec 10 14:55:00 crc kubenswrapper[4727]: I1210 14:55:00.436011 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fctzl"] Dec 10 14:55:00 crc kubenswrapper[4727]: I1210 14:55:00.459338 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fctzl"] Dec 10 14:55:00 crc kubenswrapper[4727]: I1210 14:55:00.564587 4727 scope.go:117] "RemoveContainer" containerID="3e0e9f17be973f8a24944d09e3127d5751d7e84ca0b391a68db1a16b4cf62656" Dec 10 14:55:00 crc kubenswrapper[4727]: I1210 14:55:00.592553 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc8350a-ea00-40a2-915f-3337cd27c244" path="/var/lib/kubelet/pods/6cc8350a-ea00-40a2-915f-3337cd27c244/volumes" Dec 10 14:55:02 crc kubenswrapper[4727]: I1210 14:55:02.136357 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-fctzl" podUID="6cc8350a-ea00-40a2-915f-3337cd27c244" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.102:5353: i/o timeout" Dec 10 14:55:02 crc kubenswrapper[4727]: I1210 14:55:02.354888 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8216a031-5caf-4b21-9613-c798dd35dfb7","Type":"ContainerStarted","Data":"3e6a90c7af245bf5101ab8d0a5c85b5879fb7ea4429f8a77715634231728acc3"} Dec 10 14:55:03 crc kubenswrapper[4727]: I1210 14:55:03.365455 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e1e2f39a-c206-4375-bea1-db945f0b3003","Type":"ContainerStarted","Data":"ef3147a8b84c57bc98547ccd7c20f259d32458014bcd7237a2df5dbed4d40dcc"} Dec 10 14:55:04 crc kubenswrapper[4727]: I1210 14:55:04.422005 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"00ee5804-d85a-432e-9295-b018259dcf38","Type":"ContainerStarted","Data":"00a2999e6ab942d1c3e905fa86ba3b807a6d184e64e0393a0c75e66eef308c9e"} Dec 10 14:55:05 crc kubenswrapper[4727]: I1210 14:55:05.430815 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f0273aa7-a359-4d67-9c86-7920c5d69e11","Type":"ContainerStarted","Data":"117a521c92803cef1ec9ea0fae63d4b617e88c18e1cfc00d7f0692073d3fa02f"} Dec 10 14:55:05 crc kubenswrapper[4727]: I1210 14:55:05.437565 4727 generic.go:334] "Generic (PLEG): container finished" podID="28ce859f-f595-4f9a-ad5d-1131acd951c7" containerID="8541719ee2bedad7a365e3c9dcbcc2e7fccae5a8505315d8a761a4f0bd1773e3" exitCode=0 Dec 10 14:55:05 crc 
kubenswrapper[4727]: I1210 14:55:05.437644 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"28ce859f-f595-4f9a-ad5d-1131acd951c7","Type":"ContainerDied","Data":"8541719ee2bedad7a365e3c9dcbcc2e7fccae5a8505315d8a761a4f0bd1773e3"} Dec 10 14:55:06 crc kubenswrapper[4727]: I1210 14:55:06.448455 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" event={"ID":"2375881f-bb87-45f5-83a1-a314f445945d","Type":"ContainerStarted","Data":"9c25f8de23aa7cb02b99c56c1218e7f2a68b907c896fcb837b5620184abe3873"} Dec 10 14:55:06 crc kubenswrapper[4727]: I1210 14:55:06.449572 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 14:55:06 crc kubenswrapper[4727]: I1210 14:55:06.471359 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" podStartSLOduration=17.468097627 podStartE2EDuration="47.471316695s" podCreationTimestamp="2025-12-10 14:54:19 +0000 UTC" firstStartedPulling="2025-12-10 14:54:30.361932064 +0000 UTC m=+1374.556706606" lastFinishedPulling="2025-12-10 14:55:00.365151122 +0000 UTC m=+1404.559925674" observedRunningTime="2025-12-10 14:55:06.46635334 +0000 UTC m=+1410.661127892" watchObservedRunningTime="2025-12-10 14:55:06.471316695 +0000 UTC m=+1410.666091237" Dec 10 14:55:07 crc kubenswrapper[4727]: E1210 14:55:07.029969 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 10 14:55:07 crc kubenswrapper[4727]: E1210 14:55:07.030050 4727 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 10 14:55:07 crc kubenswrapper[4727]: E1210 14:55:07.030280 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-msms2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(5d7bed75-52e4-4b97-830b-f2b55f222732): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:55:07 crc kubenswrapper[4727]: E1210 14:55:07.032329 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="5d7bed75-52e4-4b97-830b-f2b55f222732" Dec 10 14:55:07 crc kubenswrapper[4727]: E1210 14:55:07.460175 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="5d7bed75-52e4-4b97-830b-f2b55f222732" Dec 10 14:55:07 crc kubenswrapper[4727]: E1210 14:55:07.769340 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="3a122a79-dced-4afa-bb4c-4c6cd806770a" Dec 10 14:55:07 crc kubenswrapper[4727]: E1210 14:55:07.801084 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="7b92e7c2-a91e-4b8d-9316-ea7fc4e90188" Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.468287 4727 generic.go:334] "Generic (PLEG): container finished" podID="e1e2f39a-c206-4375-bea1-db945f0b3003" containerID="ef3147a8b84c57bc98547ccd7c20f259d32458014bcd7237a2df5dbed4d40dcc" exitCode=0 Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.468625 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e1e2f39a-c206-4375-bea1-db945f0b3003","Type":"ContainerDied","Data":"ef3147a8b84c57bc98547ccd7c20f259d32458014bcd7237a2df5dbed4d40dcc"} Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.469853 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0","Type":"ContainerStarted","Data":"e0a379981c73b973631f01231f5b1510b285b57dde56a1cb1358a7a9ac72da91"} Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.470090 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.478817 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" event={"ID":"98542d0d-88a6-49f6-a86e-36a5ba2d7b18","Type":"ContainerStarted","Data":"87f5e91cf6e1c2c4d75918df670f374f0da57777ee8e6b7415817303e6bc3d15"} Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.480038 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.495492 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3a122a79-dced-4afa-bb4c-4c6cd806770a","Type":"ContainerStarted","Data":"a75aff263276f4a0b57f8bdc9c81176513a0c093e11b0e30300531120d459966"} Dec 10 14:55:08 crc kubenswrapper[4727]: E1210 14:55:08.497792 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="3a122a79-dced-4afa-bb4c-4c6cd806770a" Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.505572 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d","Type":"ContainerStarted","Data":"d5cbbdaa3831a9b4f41b599a46dac5b70174964e0f13a445afa5da20fb67ea16"} Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.511588 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188","Type":"ContainerStarted","Data":"dd49434bdf6008be6f12dcffe52b013e6a4e7c84c2a9c873034ffafe5ef89a95"} Dec 10 14:55:08 crc kubenswrapper[4727]: E1210 14:55:08.513575 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="7b92e7c2-a91e-4b8d-9316-ea7fc4e90188" Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.519482 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.519535 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" event={"ID":"26378e88-9d49-423c-b1a9-91a70f08b760","Type":"ContainerStarted","Data":"e3dd0829bacadc2efbebb51ebf9940dab237c7ba721c521f8952f2cadfd420b9"} Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.520306 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.522874 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"28ce859f-f595-4f9a-ad5d-1131acd951c7","Type":"ContainerStarted","Data":"23a3f6ec9e03d757e9895cc9cc380b400cb166fdfc4adc24fc70e95f4a2e1aea"} Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.523717 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 
14:55:08.529494 4727 generic.go:334] "Generic (PLEG): container finished" podID="fb42bc94-d07f-4121-8591-8f868d089a2a" containerID="732d9edd0be705a0b93cd2c25593e301b6b2ce2b5bb41f7567d882a52ca21ff9" exitCode=0 Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.529555 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4tq5b" event={"ID":"fb42bc94-d07f-4121-8591-8f868d089a2a","Type":"ContainerDied","Data":"732d9edd0be705a0b93cd2c25593e301b6b2ce2b5bb41f7567d882a52ca21ff9"} Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.530121 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" podStartSLOduration=12.067243128 podStartE2EDuration="49.530100656s" podCreationTimestamp="2025-12-10 14:54:19 +0000 UTC" firstStartedPulling="2025-12-10 14:54:29.640399929 +0000 UTC m=+1373.835174471" lastFinishedPulling="2025-12-10 14:55:07.103257457 +0000 UTC m=+1411.298031999" observedRunningTime="2025-12-10 14:55:08.530037835 +0000 UTC m=+1412.724812377" watchObservedRunningTime="2025-12-10 14:55:08.530100656 +0000 UTC m=+1412.724875198" Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.552099 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=-9223371987.3027 podStartE2EDuration="49.552076161s" podCreationTimestamp="2025-12-10 14:54:19 +0000 UTC" firstStartedPulling="2025-12-10 14:54:30.36020574 +0000 UTC m=+1374.554980282" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:08.54846122 +0000 UTC m=+1412.743235762" watchObservedRunningTime="2025-12-10 14:55:08.552076161 +0000 UTC m=+1412.746850703" Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.578819 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.646522 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" podStartSLOduration=13.651996766 podStartE2EDuration="49.646501576s" podCreationTimestamp="2025-12-10 14:54:19 +0000 UTC" firstStartedPulling="2025-12-10 14:54:29.623943043 +0000 UTC m=+1373.818717595" lastFinishedPulling="2025-12-10 14:55:05.618447863 +0000 UTC m=+1409.813222405" observedRunningTime="2025-12-10 14:55:08.636705739 +0000 UTC m=+1412.831480281" watchObservedRunningTime="2025-12-10 14:55:08.646501576 +0000 UTC m=+1412.841276128" Dec 10 14:55:08 crc kubenswrapper[4727]: I1210 14:55:08.689159 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=66.883577206 podStartE2EDuration="1m7.689136603s" podCreationTimestamp="2025-12-10 14:54:01 +0000 UTC" firstStartedPulling="2025-12-10 14:54:29.630811137 +0000 UTC m=+1373.825585679" lastFinishedPulling="2025-12-10 14:54:30.436370534 +0000 UTC m=+1374.631145076" observedRunningTime="2025-12-10 14:55:08.680603518 +0000 UTC m=+1412.875378060" watchObservedRunningTime="2025-12-10 14:55:08.689136603 +0000 UTC m=+1412.883911145" Dec 10 14:55:09 crc kubenswrapper[4727]: I1210 14:55:09.539492 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4tq5b" event={"ID":"fb42bc94-d07f-4121-8591-8f868d089a2a","Type":"ContainerStarted","Data":"2bbf9b1741a6744a5479e824e6ffe6f1b3dd02ae5f0f09f376c9ce7fd34cbf88"} Dec 10 14:55:09 
crc kubenswrapper[4727]: I1210 14:55:09.539978 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:55:09 crc kubenswrapper[4727]: I1210 14:55:09.539999 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4tq5b" event={"ID":"fb42bc94-d07f-4121-8591-8f868d089a2a","Type":"ContainerStarted","Data":"c783de9a20dcd9597bd779a47e91b978a4fbab2249bec14289fa0a696ec339e5"} Dec 10 14:55:09 crc kubenswrapper[4727]: I1210 14:55:09.540013 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:55:09 crc kubenswrapper[4727]: I1210 14:55:09.541925 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e1e2f39a-c206-4375-bea1-db945f0b3003","Type":"ContainerStarted","Data":"c575ee6c6043f4a87479272fe73068cc82b395d2d7a0bbc2933ca9ac80a2bdcc"} Dec 10 14:55:09 crc kubenswrapper[4727]: E1210 14:55:09.544384 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="3a122a79-dced-4afa-bb4c-4c6cd806770a" Dec 10 14:55:09 crc kubenswrapper[4727]: I1210 14:55:09.577598 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-4tq5b" podStartSLOduration=28.696712206 podStartE2EDuration="58.577577494s" podCreationTimestamp="2025-12-10 14:54:11 +0000 UTC" firstStartedPulling="2025-12-10 14:54:30.488536332 +0000 UTC m=+1374.683310874" lastFinishedPulling="2025-12-10 14:55:00.36940162 +0000 UTC m=+1404.564176162" observedRunningTime="2025-12-10 14:55:09.57662109 +0000 UTC m=+1413.771395632" watchObservedRunningTime="2025-12-10 14:55:09.577577494 +0000 UTC m=+1413.772352036" Dec 10 14:55:09 crc kubenswrapper[4727]: I1210 14:55:09.675511 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=35.727271678 podStartE2EDuration="1m5.675492528s" podCreationTimestamp="2025-12-10 14:54:04 +0000 UTC" firstStartedPulling="2025-12-10 14:54:29.639898416 +0000 UTC m=+1373.834672958" lastFinishedPulling="2025-12-10 14:54:59.588119256 +0000 UTC m=+1403.782893808" observedRunningTime="2025-12-10 14:55:09.670380008 +0000 UTC m=+1413.865154550" watchObservedRunningTime="2025-12-10 14:55:09.675492528 +0000 UTC m=+1413.870267070" Dec 10 14:55:10 crc kubenswrapper[4727]: I1210 14:55:10.555603 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"cc5506b0-8e70-49c6-8783-5802aca6f72e","Type":"ContainerStarted","Data":"6c32b0d6e5e6077b749e9c0363ea9cf18f2d35e78f9a420832ce9108a8f6a5f2"} Dec 10 14:55:11 crc kubenswrapper[4727]: I1210 14:55:11.566271 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:55:11 crc kubenswrapper[4727]: I1210 14:55:11.643257 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=-9223371984.21154 podStartE2EDuration="52.64323498s" podCreationTimestamp="2025-12-10 14:54:19 +0000 UTC" firstStartedPulling="2025-12-10 14:54:30.360257582 +0000 UTC m=+1374.555032114" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
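Aside (illustrative, not journal output): the huge negative podStartSLOduration values here (-9223371987.3027 for the index-gateway, -9223371984.21154 for the compactor) are not log corruption. Every pull attempt for these pods failed earlier with ErrImagePull, so lastFinishedPulling is still Go's zero time.Time ("0001-01-01 00:00:00 +0000 UTC"); time.Time.Sub saturates at math.MinInt64 nanoseconds, and subtracting that pull window from the end-to-end duration wraps int64. A stdlib-only sketch reproducing the compactor's logged value (the exact formula is inferred from the printed fields, not quoted from kubelet source):

package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	// firstStartedPulling as logged for cloudkitty-lokistack-compactor-0.
	firstStartedPulling := time.Date(2025, 12, 10, 14, 54, 30, 360257582, time.UTC)
	// lastFinishedPulling was never set: "0001-01-01 00:00:00 +0000 UTC".
	var lastFinishedPulling time.Time

	e2e := 52643234980 * time.Nanosecond // podStartE2EDuration="52.64323498s"

	// Sub saturates rather than overflowing: roughly -2025 years does not fit
	// in an int64 of nanoseconds, so it clamps to math.MinInt64.
	pullWindow := lastFinishedPulling.Sub(firstStartedPulling)
	fmt.Println(pullWindow == time.Duration(math.MinInt64)) // true

	// e2e - MinInt64 overflows int64 and wraps to e2e + MinInt64,
	// which is exactly the value printed in the journal entry above.
	slo := e2e - pullWindow
	fmt.Printf("podStartSLOduration=%.5f\n", slo.Seconds()) // -9223371984.21154
}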
observedRunningTime="2025-12-10 14:55:11.640968183 +0000 UTC m=+1415.835742735" watchObservedRunningTime="2025-12-10 14:55:11.64323498 +0000 UTC m=+1415.838009522" Dec 10 14:55:13 crc kubenswrapper[4727]: I1210 14:55:13.585244 4727 generic.go:334] "Generic (PLEG): container finished" podID="00ee5804-d85a-432e-9295-b018259dcf38" containerID="00a2999e6ab942d1c3e905fa86ba3b807a6d184e64e0393a0c75e66eef308c9e" exitCode=0 Dec 10 14:55:13 crc kubenswrapper[4727]: I1210 14:55:13.585810 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"00ee5804-d85a-432e-9295-b018259dcf38","Type":"ContainerDied","Data":"00a2999e6ab942d1c3e905fa86ba3b807a6d184e64e0393a0c75e66eef308c9e"} Dec 10 14:55:13 crc kubenswrapper[4727]: I1210 14:55:13.591320 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh" event={"ID":"382fdc05-9257-41ca-a032-1dc84b1483e3","Type":"ContainerStarted","Data":"8a80f8a4651493d653683196e9a9da7d12c13d8d0272e1faf3201d7c6dfddf83"} Dec 10 14:55:13 crc kubenswrapper[4727]: I1210 14:55:13.591553 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh" Dec 10 14:55:13 crc kubenswrapper[4727]: I1210 14:55:13.594506 4727 generic.go:334] "Generic (PLEG): container finished" podID="f0273aa7-a359-4d67-9c86-7920c5d69e11" containerID="117a521c92803cef1ec9ea0fae63d4b617e88c18e1cfc00d7f0692073d3fa02f" exitCode=0 Dec 10 14:55:13 crc kubenswrapper[4727]: I1210 14:55:13.594577 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f0273aa7-a359-4d67-9c86-7920c5d69e11","Type":"ContainerDied","Data":"117a521c92803cef1ec9ea0fae63d4b617e88c18e1cfc00d7f0692073d3fa02f"} Dec 10 14:55:13 crc kubenswrapper[4727]: I1210 14:55:13.600509 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" event={"ID":"b9128ddd-7fb4-41b8-a218-2201b9ef57ad","Type":"ContainerStarted","Data":"73168f026b7ee2a0518c131b8a8d944ae49445e3d21af643bce690d59f2f9790"} Dec 10 14:55:13 crc kubenswrapper[4727]: I1210 14:55:13.601026 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:55:13 crc kubenswrapper[4727]: I1210 14:55:13.601806 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"f429b94a-d632-41b7-85c3-584f6dfe4475","Type":"ContainerStarted","Data":"94bbc5178158254c69445da8de8b9cb648167455f1604df38821cd96546171bc"} Dec 10 14:55:13 crc kubenswrapper[4727]: I1210 14:55:13.602075 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:55:13 crc kubenswrapper[4727]: I1210 14:55:13.650239 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" podStartSLOduration=-9223371982.204586 podStartE2EDuration="54.650188913s" podCreationTimestamp="2025-12-10 14:54:19 +0000 UTC" firstStartedPulling="2025-12-10 14:54:30.295317351 +0000 UTC m=+1374.490091893" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:13.639262587 +0000 UTC m=+1417.834037139" watchObservedRunningTime="2025-12-10 14:55:13.650188913 +0000 UTC m=+1417.844963455" Dec 10 14:55:13 crc kubenswrapper[4727]: I1210 14:55:13.662461 
4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=-9223371982.192339 podStartE2EDuration="54.662437573s" podCreationTimestamp="2025-12-10 14:54:19 +0000 UTC" firstStartedPulling="2025-12-10 14:54:30.360494678 +0000 UTC m=+1374.555269220" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:13.661402417 +0000 UTC m=+1417.856176979" watchObservedRunningTime="2025-12-10 14:55:13.662437573 +0000 UTC m=+1417.857212115" Dec 10 14:55:14 crc kubenswrapper[4727]: I1210 14:55:14.597881 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh" podStartSLOduration=-9223371981.25692 podStartE2EDuration="55.59785673s" podCreationTimestamp="2025-12-10 14:54:19 +0000 UTC" firstStartedPulling="2025-12-10 14:54:30.292284325 +0000 UTC m=+1374.487058867" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:13.702673429 +0000 UTC m=+1417.897447971" watchObservedRunningTime="2025-12-10 14:55:14.59785673 +0000 UTC m=+1418.792631272" Dec 10 14:55:14 crc kubenswrapper[4727]: I1210 14:55:14.623362 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6e710c7e-8c31-487b-ade5-a403f619e489","Type":"ContainerStarted","Data":"f4712957928df629f3847c6c71e73c9425a4c220459a2f735c972c3c1069e139"} Dec 10 14:55:14 crc kubenswrapper[4727]: I1210 14:55:14.623572 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 10 14:55:14 crc kubenswrapper[4727]: I1210 14:55:14.626755 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f0273aa7-a359-4d67-9c86-7920c5d69e11","Type":"ContainerStarted","Data":"1dfa9bc6783859796c17dd9198102806942a7d3f7ce7f7783b65e9e407984802"} Dec 10 14:55:14 crc kubenswrapper[4727]: I1210 14:55:14.628945 4727 generic.go:334] "Generic (PLEG): container finished" podID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerID="d5cbbdaa3831a9b4f41b599a46dac5b70174964e0f13a445afa5da20fb67ea16" exitCode=0 Dec 10 14:55:14 crc kubenswrapper[4727]: I1210 14:55:14.629034 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d","Type":"ContainerDied","Data":"d5cbbdaa3831a9b4f41b599a46dac5b70174964e0f13a445afa5da20fb67ea16"} Dec 10 14:55:14 crc kubenswrapper[4727]: I1210 14:55:14.631645 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7b92e7c2-a91e-4b8d-9316-ea7fc4e90188","Type":"ContainerStarted","Data":"b1d512ded8cc5044cb4d13c6e75fdeac75cbb3695c2e3f4a224e9216aa0c61b3"} Dec 10 14:55:14 crc kubenswrapper[4727]: I1210 14:55:14.651491 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=24.953035475 podStartE2EDuration="1m8.651467174s" podCreationTimestamp="2025-12-10 14:54:06 +0000 UTC" firstStartedPulling="2025-12-10 14:54:29.62064653 +0000 UTC m=+1373.815421072" lastFinishedPulling="2025-12-10 14:55:13.319078229 +0000 UTC m=+1417.513852771" observedRunningTime="2025-12-10 14:55:14.646516109 +0000 UTC m=+1418.841290661" watchObservedRunningTime="2025-12-10 14:55:14.651467174 +0000 UTC m=+1418.846241736" Dec 10 14:55:14 crc kubenswrapper[4727]: I1210 14:55:14.704537 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovsdbserver-sb-0" podStartSLOduration=15.13617226 podStartE2EDuration="58.704515234s" podCreationTimestamp="2025-12-10 14:54:16 +0000 UTC" firstStartedPulling="2025-12-10 14:54:29.748403507 +0000 UTC m=+1373.943178049" lastFinishedPulling="2025-12-10 14:55:13.316746481 +0000 UTC m=+1417.511521023" observedRunningTime="2025-12-10 14:55:14.697754574 +0000 UTC m=+1418.892529116" watchObservedRunningTime="2025-12-10 14:55:14.704515234 +0000 UTC m=+1418.899289776" Dec 10 14:55:14 crc kubenswrapper[4727]: I1210 14:55:14.719496 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=41.377010665 podStartE2EDuration="1m11.719473252s" podCreationTimestamp="2025-12-10 14:54:03 +0000 UTC" firstStartedPulling="2025-12-10 14:54:29.245476504 +0000 UTC m=+1373.440251046" lastFinishedPulling="2025-12-10 14:54:59.587939091 +0000 UTC m=+1403.782713633" observedRunningTime="2025-12-10 14:55:14.718787075 +0000 UTC m=+1418.913561627" watchObservedRunningTime="2025-12-10 14:55:14.719473252 +0000 UTC m=+1418.914247794" Dec 10 14:55:14 crc kubenswrapper[4727]: I1210 14:55:14.972655 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 10 14:55:14 crc kubenswrapper[4727]: I1210 14:55:14.972708 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 10 14:55:16 crc kubenswrapper[4727]: I1210 14:55:16.313512 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 10 14:55:16 crc kubenswrapper[4727]: I1210 14:55:16.313923 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 10 14:55:17 crc kubenswrapper[4727]: I1210 14:55:17.408074 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 10 14:55:17 crc kubenswrapper[4727]: I1210 14:55:17.408653 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 10 14:55:17 crc kubenswrapper[4727]: I1210 14:55:17.450278 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 10 14:55:18 crc kubenswrapper[4727]: I1210 14:55:18.508606 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 10 14:55:18 crc kubenswrapper[4727]: I1210 14:55:18.619878 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 10 14:55:18 crc kubenswrapper[4727]: I1210 14:55:18.684356 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x2fq6" event={"ID":"69a1889a-3ba4-463e-bd9a-4f417ca69280","Type":"ContainerStarted","Data":"967f4e7726bdc301de7494f41ff6c475074acba1015a1be2fb587b80ab25891b"} Dec 10 14:55:18 crc kubenswrapper[4727]: I1210 14:55:18.685288 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-x2fq6" Dec 10 14:55:18 crc kubenswrapper[4727]: I1210 14:55:18.774341 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-x2fq6" podStartSLOduration=19.115751917 podStartE2EDuration="1m7.774318492s" podCreationTimestamp="2025-12-10 14:54:11 +0000 UTC" firstStartedPulling="2025-12-10 14:54:28.731607665 +0000 UTC m=+1372.926382207" lastFinishedPulling="2025-12-10 
14:55:17.39017423 +0000 UTC m=+1421.584948782" observedRunningTime="2025-12-10 14:55:18.755627609 +0000 UTC m=+1422.950402151" watchObservedRunningTime="2025-12-10 14:55:18.774318492 +0000 UTC m=+1422.969093034" Dec 10 14:55:18 crc kubenswrapper[4727]: I1210 14:55:18.794575 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.160841 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-2gb4z"] Dec 10 14:55:19 crc kubenswrapper[4727]: E1210 14:55:19.161590 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc8350a-ea00-40a2-915f-3337cd27c244" containerName="init" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.161606 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc8350a-ea00-40a2-915f-3337cd27c244" containerName="init" Dec 10 14:55:19 crc kubenswrapper[4727]: E1210 14:55:19.161633 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc8350a-ea00-40a2-915f-3337cd27c244" containerName="dnsmasq-dns" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.161642 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc8350a-ea00-40a2-915f-3337cd27c244" containerName="dnsmasq-dns" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.161816 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc8350a-ea00-40a2-915f-3337cd27c244" containerName="dnsmasq-dns" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.162993 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.168562 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.182897 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-2gb4z"] Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.255834 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-2gb4z\" (UID: \"7e4a7719-fa50-463e-b188-2fc3ba554d27\") " pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.255956 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-config\") pod \"dnsmasq-dns-7f896c8c65-2gb4z\" (UID: \"7e4a7719-fa50-463e-b188-2fc3ba554d27\") " pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.256376 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-2gb4z\" (UID: \"7e4a7719-fa50-463e-b188-2fc3ba554d27\") " pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.256516 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmd2t\" (UniqueName: \"kubernetes.io/projected/7e4a7719-fa50-463e-b188-2fc3ba554d27-kube-api-access-rmd2t\") pod \"dnsmasq-dns-7f896c8c65-2gb4z\" (UID: 
\"7e4a7719-fa50-463e-b188-2fc3ba554d27\") " pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.278165 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vdzvd"] Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.279805 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vdzvd" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.282215 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.298411 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vdzvd"] Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.358451 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-2gb4z\" (UID: \"7e4a7719-fa50-463e-b188-2fc3ba554d27\") " pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.358541 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/afafee8a-246f-4de6-90c8-efb386092985-ovs-rundir\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.358579 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/afafee8a-246f-4de6-90c8-efb386092985-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.358610 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmd2t\" (UniqueName: \"kubernetes.io/projected/7e4a7719-fa50-463e-b188-2fc3ba554d27-kube-api-access-rmd2t\") pod \"dnsmasq-dns-7f896c8c65-2gb4z\" (UID: \"7e4a7719-fa50-463e-b188-2fc3ba554d27\") " pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.358860 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-2gb4z\" (UID: \"7e4a7719-fa50-463e-b188-2fc3ba554d27\") " pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.359164 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afafee8a-246f-4de6-90c8-efb386092985-config\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd" Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.359208 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-config\") pod \"dnsmasq-dns-7f896c8c65-2gb4z\" (UID: \"7e4a7719-fa50-463e-b188-2fc3ba554d27\") " pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z" Dec 10 14:55:19 crc 
kubenswrapper[4727]: I1210 14:55:19.359315 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afafee8a-246f-4de6-90c8-efb386092985-combined-ca-bundle\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.359368 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/afafee8a-246f-4de6-90c8-efb386092985-ovn-rundir\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.359422 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-647kk\" (UniqueName: \"kubernetes.io/projected/afafee8a-246f-4de6-90c8-efb386092985-kube-api-access-647kk\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.359774 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-2gb4z\" (UID: \"7e4a7719-fa50-463e-b188-2fc3ba554d27\") " pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.359805 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-2gb4z\" (UID: \"7e4a7719-fa50-463e-b188-2fc3ba554d27\") " pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.360554 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-config\") pod \"dnsmasq-dns-7f896c8c65-2gb4z\" (UID: \"7e4a7719-fa50-463e-b188-2fc3ba554d27\") " pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.379213 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmd2t\" (UniqueName: \"kubernetes.io/projected/7e4a7719-fa50-463e-b188-2fc3ba554d27-kube-api-access-rmd2t\") pod \"dnsmasq-dns-7f896c8c65-2gb4z\" (UID: \"7e4a7719-fa50-463e-b188-2fc3ba554d27\") " pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.461700 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-647kk\" (UniqueName: \"kubernetes.io/projected/afafee8a-246f-4de6-90c8-efb386092985-kube-api-access-647kk\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.461816 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/afafee8a-246f-4de6-90c8-efb386092985-ovs-rundir\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.461847 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/afafee8a-246f-4de6-90c8-efb386092985-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.461951 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afafee8a-246f-4de6-90c8-efb386092985-config\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.461995 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afafee8a-246f-4de6-90c8-efb386092985-combined-ca-bundle\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.462018 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/afafee8a-246f-4de6-90c8-efb386092985-ovn-rundir\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.462185 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/afafee8a-246f-4de6-90c8-efb386092985-ovn-rundir\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.462194 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/afafee8a-246f-4de6-90c8-efb386092985-ovs-rundir\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.464328 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afafee8a-246f-4de6-90c8-efb386092985-config\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.469674 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/afafee8a-246f-4de6-90c8-efb386092985-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.471353 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afafee8a-246f-4de6-90c8-efb386092985-combined-ca-bundle\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.487116 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.519578 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-647kk\" (UniqueName: \"kubernetes.io/projected/afafee8a-246f-4de6-90c8-efb386092985-kube-api-access-647kk\") pod \"ovn-controller-metrics-vdzvd\" (UID: \"afafee8a-246f-4de6-90c8-efb386092985\") " pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.535435 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-2gb4z"]
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.581201 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mplp7"]
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.583517 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.586936 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.592050 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mplp7"]
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.605942 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vdzvd"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.669574 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qzwg\" (UniqueName: \"kubernetes.io/projected/6139edab-f197-4da9-88c6-c8625947ab08-kube-api-access-9qzwg\") pod \"dnsmasq-dns-86db49b7ff-mplp7\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.669878 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-mplp7\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.669986 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-config\") pod \"dnsmasq-dns-86db49b7ff-mplp7\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.670118 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-mplp7\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.670276 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-mplp7\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.719301 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"00ee5804-d85a-432e-9295-b018259dcf38","Type":"ContainerStarted","Data":"155b0155042da9873c8bbf6818b1d91e1d67b72e85ca1fc33558c2774b992efd"}
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.771477 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-mplp7\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.771572 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-mplp7\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.771649 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qzwg\" (UniqueName: \"kubernetes.io/projected/6139edab-f197-4da9-88c6-c8625947ab08-kube-api-access-9qzwg\") pod \"dnsmasq-dns-86db49b7ff-mplp7\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.771725 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-mplp7\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.771770 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-config\") pod \"dnsmasq-dns-86db49b7ff-mplp7\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.772481 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-mplp7\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.773312 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-mplp7\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.774282 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-mplp7\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.774898 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-config\") pod \"dnsmasq-dns-86db49b7ff-mplp7\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:19 crc kubenswrapper[4727]: I1210 14:55:19.799147 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qzwg\" (UniqueName: \"kubernetes.io/projected/6139edab-f197-4da9-88c6-c8625947ab08-kube-api-access-9qzwg\") pod \"dnsmasq-dns-86db49b7ff-mplp7\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:20 crc kubenswrapper[4727]: I1210 14:55:20.065564 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-mplp7"
Dec 10 14:55:20 crc kubenswrapper[4727]: I1210 14:55:20.096587 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w"
Dec 10 14:55:20 crc kubenswrapper[4727]: I1210 14:55:20.172724 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vdzvd"]
Dec 10 14:55:20 crc kubenswrapper[4727]: I1210 14:55:20.296037 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-2gb4z"]
Dec 10 14:55:20 crc kubenswrapper[4727]: I1210 14:55:20.736783 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vdzvd" event={"ID":"afafee8a-246f-4de6-90c8-efb386092985","Type":"ContainerStarted","Data":"99ce582173a486dd7b400770a7dae17d8a6bb95c3001627d28614ef5a8807ecc"}
Dec 10 14:55:20 crc kubenswrapper[4727]: I1210 14:55:20.737745 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vdzvd" event={"ID":"afafee8a-246f-4de6-90c8-efb386092985","Type":"ContainerStarted","Data":"345cbe3d1b6cf9c622eadb6efdc859e1f39d27247fd523fbcd32764df01d4736"}
Dec 10 14:55:20 crc kubenswrapper[4727]: I1210 14:55:20.740629 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5d7bed75-52e4-4b97-830b-f2b55f222732","Type":"ContainerStarted","Data":"1dbf2f960a48083059cf0dfcfad95fb6c576364fefd4fdae813dfd926a2e9caa"}
Dec 10 14:55:20 crc kubenswrapper[4727]: I1210 14:55:20.741174 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Dec 10 14:55:20 crc kubenswrapper[4727]: I1210 14:55:20.747129 4727 generic.go:334] "Generic (PLEG): container finished" podID="7e4a7719-fa50-463e-b188-2fc3ba554d27" containerID="2920dab5f8f9ff34d0e1661921f5a782f2ceebbc8c5ef84b415e135917b88eeb" exitCode=0
Dec 10 14:55:20 crc kubenswrapper[4727]: I1210 14:55:20.747351 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z" event={"ID":"7e4a7719-fa50-463e-b188-2fc3ba554d27","Type":"ContainerDied","Data":"2920dab5f8f9ff34d0e1661921f5a782f2ceebbc8c5ef84b415e135917b88eeb"}
Dec 10 14:55:20 crc kubenswrapper[4727]: I1210 14:55:20.747491 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z" event={"ID":"7e4a7719-fa50-463e-b188-2fc3ba554d27","Type":"ContainerStarted","Data":"40b32953a1beb0318d50ca4971636238784a1356637c795ca041a9ea17742a71"}
Dec 10 14:55:20 crc kubenswrapper[4727]: I1210 14:55:20.763148 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vdzvd" podStartSLOduration=1.7631216539999999 podStartE2EDuration="1.763121654s" podCreationTimestamp="2025-12-10 14:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:20.756873827 +0000 UTC m=+1424.951648369" watchObservedRunningTime="2025-12-10 14:55:20.763121654 +0000 UTC m=+1424.957896206"
Dec 10 14:55:20 crc kubenswrapper[4727]: W1210 14:55:20.784779 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6139edab_f197_4da9_88c6_c8625947ab08.slice/crio-1b77fd2a303ae5cce2b03c1e20768494dcf64f035828007ef7ddc858f0929d2e WatchSource:0}: Error finding container 1b77fd2a303ae5cce2b03c1e20768494dcf64f035828007ef7ddc858f0929d2e: Status 404 returned error can't find the container with id 1b77fd2a303ae5cce2b03c1e20768494dcf64f035828007ef7ddc858f0929d2e
Dec 10 14:55:20 crc kubenswrapper[4727]: I1210 14:55:20.788312 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mplp7"]
Dec 10 14:55:20 crc kubenswrapper[4727]: I1210 14:55:20.794852 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=22.941513832 podStartE2EDuration="1m12.794831625s" podCreationTimestamp="2025-12-10 14:54:08 +0000 UTC" firstStartedPulling="2025-12-10 14:54:30.348778352 +0000 UTC m=+1374.543552904" lastFinishedPulling="2025-12-10 14:55:20.202096155 +0000 UTC m=+1424.396870697" observedRunningTime="2025-12-10 14:55:20.78235448 +0000 UTC m=+1424.977129022" watchObservedRunningTime="2025-12-10 14:55:20.794831625 +0000 UTC m=+1424.989606167"
Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.089443 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z"
Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.116318 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.223861 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.248806 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-ovsdbserver-sb\") pod \"7e4a7719-fa50-463e-b188-2fc3ba554d27\" (UID: \"7e4a7719-fa50-463e-b188-2fc3ba554d27\") "
Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.249052 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmd2t\" (UniqueName: \"kubernetes.io/projected/7e4a7719-fa50-463e-b188-2fc3ba554d27-kube-api-access-rmd2t\") pod \"7e4a7719-fa50-463e-b188-2fc3ba554d27\" (UID: \"7e4a7719-fa50-463e-b188-2fc3ba554d27\") "
Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.249443 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-config\") pod \"7e4a7719-fa50-463e-b188-2fc3ba554d27\" (UID: \"7e4a7719-fa50-463e-b188-2fc3ba554d27\") "
\"7e4a7719-fa50-463e-b188-2fc3ba554d27\") " Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.345171 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e4a7719-fa50-463e-b188-2fc3ba554d27-kube-api-access-rmd2t" (OuterVolumeSpecName: "kube-api-access-rmd2t") pod "7e4a7719-fa50-463e-b188-2fc3ba554d27" (UID: "7e4a7719-fa50-463e-b188-2fc3ba554d27"). InnerVolumeSpecName "kube-api-access-rmd2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.347232 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-config" (OuterVolumeSpecName: "config") pod "7e4a7719-fa50-463e-b188-2fc3ba554d27" (UID: "7e4a7719-fa50-463e-b188-2fc3ba554d27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.351741 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmd2t\" (UniqueName: \"kubernetes.io/projected/7e4a7719-fa50-463e-b188-2fc3ba554d27-kube-api-access-rmd2t\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.351786 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.359463 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7e4a7719-fa50-463e-b188-2fc3ba554d27" (UID: "7e4a7719-fa50-463e-b188-2fc3ba554d27"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.391931 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7e4a7719-fa50-463e-b188-2fc3ba554d27" (UID: "7e4a7719-fa50-463e-b188-2fc3ba554d27"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.454042 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.454100 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e4a7719-fa50-463e-b188-2fc3ba554d27-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.758614 4727 generic.go:334] "Generic (PLEG): container finished" podID="6139edab-f197-4da9-88c6-c8625947ab08" containerID="88b163c24328359cd22c95ce585d7ad8aaee4f5b5ee573727dcfe5a55a3e652c" exitCode=0 Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.758693 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-mplp7" event={"ID":"6139edab-f197-4da9-88c6-c8625947ab08","Type":"ContainerDied","Data":"88b163c24328359cd22c95ce585d7ad8aaee4f5b5ee573727dcfe5a55a3e652c"} Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.758759 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-mplp7" event={"ID":"6139edab-f197-4da9-88c6-c8625947ab08","Type":"ContainerStarted","Data":"1b77fd2a303ae5cce2b03c1e20768494dcf64f035828007ef7ddc858f0929d2e"} Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.761166 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z" event={"ID":"7e4a7719-fa50-463e-b188-2fc3ba554d27","Type":"ContainerDied","Data":"40b32953a1beb0318d50ca4971636238784a1356637c795ca041a9ea17742a71"} Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.761238 4727 scope.go:117] "RemoveContainer" containerID="2920dab5f8f9ff34d0e1661921f5a782f2ceebbc8c5ef84b415e135917b88eeb" Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.761237 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-2gb4z" Dec 10 14:55:21 crc kubenswrapper[4727]: I1210 14:55:21.799221 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.071632 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-2gb4z"] Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.080125 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-2gb4z"] Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.142919 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a77b-account-create-update-zhrh8"] Dec 10 14:55:22 crc kubenswrapper[4727]: E1210 14:55:22.143316 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4a7719-fa50-463e-b188-2fc3ba554d27" containerName="init" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.143334 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4a7719-fa50-463e-b188-2fc3ba554d27" containerName="init" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.143524 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4a7719-fa50-463e-b188-2fc3ba554d27" containerName="init" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.144227 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a77b-account-create-update-zhrh8" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.150114 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.154626 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a77b-account-create-update-zhrh8"] Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.180478 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-fds6f"] Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.181759 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fds6f" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.195669 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fds6f"] Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.270668 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt82m\" (UniqueName: \"kubernetes.io/projected/9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6-kube-api-access-zt82m\") pod \"glance-a77b-account-create-update-zhrh8\" (UID: \"9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6\") " pod="openstack/glance-a77b-account-create-update-zhrh8" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.270805 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n6m4\" (UniqueName: \"kubernetes.io/projected/7df0cafa-4dc3-445a-8107-1098c218c787-kube-api-access-4n6m4\") pod \"glance-db-create-fds6f\" (UID: \"7df0cafa-4dc3-445a-8107-1098c218c787\") " pod="openstack/glance-db-create-fds6f" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.270850 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7df0cafa-4dc3-445a-8107-1098c218c787-operator-scripts\") pod \"glance-db-create-fds6f\" (UID: \"7df0cafa-4dc3-445a-8107-1098c218c787\") " pod="openstack/glance-db-create-fds6f" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.270950 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6-operator-scripts\") pod \"glance-a77b-account-create-update-zhrh8\" (UID: \"9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6\") " pod="openstack/glance-a77b-account-create-update-zhrh8" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.373685 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt82m\" (UniqueName: \"kubernetes.io/projected/9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6-kube-api-access-zt82m\") pod \"glance-a77b-account-create-update-zhrh8\" (UID: \"9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6\") " pod="openstack/glance-a77b-account-create-update-zhrh8" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.373823 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n6m4\" (UniqueName: \"kubernetes.io/projected/7df0cafa-4dc3-445a-8107-1098c218c787-kube-api-access-4n6m4\") pod \"glance-db-create-fds6f\" (UID: \"7df0cafa-4dc3-445a-8107-1098c218c787\") " pod="openstack/glance-db-create-fds6f" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.373864 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7df0cafa-4dc3-445a-8107-1098c218c787-operator-scripts\") pod \"glance-db-create-fds6f\" (UID: \"7df0cafa-4dc3-445a-8107-1098c218c787\") " pod="openstack/glance-db-create-fds6f" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.373953 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6-operator-scripts\") pod \"glance-a77b-account-create-update-zhrh8\" (UID: \"9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6\") " pod="openstack/glance-a77b-account-create-update-zhrh8" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.374867 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7df0cafa-4dc3-445a-8107-1098c218c787-operator-scripts\") pod \"glance-db-create-fds6f\" (UID: \"7df0cafa-4dc3-445a-8107-1098c218c787\") " pod="openstack/glance-db-create-fds6f" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.375060 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6-operator-scripts\") pod \"glance-a77b-account-create-update-zhrh8\" (UID: \"9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6\") " pod="openstack/glance-a77b-account-create-update-zhrh8" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.399037 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt82m\" (UniqueName: \"kubernetes.io/projected/9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6-kube-api-access-zt82m\") pod \"glance-a77b-account-create-update-zhrh8\" (UID: \"9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6\") " pod="openstack/glance-a77b-account-create-update-zhrh8" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.399460 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n6m4\" (UniqueName: \"kubernetes.io/projected/7df0cafa-4dc3-445a-8107-1098c218c787-kube-api-access-4n6m4\") pod \"glance-db-create-fds6f\" (UID: \"7df0cafa-4dc3-445a-8107-1098c218c787\") " pod="openstack/glance-db-create-fds6f" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.475616 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a77b-account-create-update-zhrh8" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.506249 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fds6f" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.583630 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e4a7719-fa50-463e-b188-2fc3ba554d27" path="/var/lib/kubelet/pods/7e4a7719-fa50-463e-b188-2fc3ba554d27/volumes" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.774219 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-mplp7" event={"ID":"6139edab-f197-4da9-88c6-c8625947ab08","Type":"ContainerStarted","Data":"52815581f412a639d29f38fc4d2de1a40eac910e17a4cfb2379e66a2a5de8674"} Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.776045 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-mplp7" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.782017 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3a122a79-dced-4afa-bb4c-4c6cd806770a","Type":"ContainerStarted","Data":"f7532498b53b8cdb4345dc3562b195598c2abc831ec1c95a45fa15f2a94a510c"} Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.790956 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"00ee5804-d85a-432e-9295-b018259dcf38","Type":"ContainerStarted","Data":"fecc172086245f5868b0cec42b58d9d2dafff7b6e2efcde300c0ed5629d11994"} Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.791585 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.797709 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.803781 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-mplp7" podStartSLOduration=3.803751763 podStartE2EDuration="3.803751763s" podCreationTimestamp="2025-12-10 14:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:22.797269769 +0000 UTC m=+1426.992044311" watchObservedRunningTime="2025-12-10 14:55:22.803751763 +0000 UTC m=+1426.998526325" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.826280 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=24.558275852 podStartE2EDuration="1m13.826253401s" podCreationTimestamp="2025-12-10 14:54:09 +0000 UTC" firstStartedPulling="2025-12-10 14:54:29.299606081 +0000 UTC m=+1373.494380623" lastFinishedPulling="2025-12-10 14:55:18.56758363 +0000 UTC m=+1422.762358172" observedRunningTime="2025-12-10 14:55:22.823083241 +0000 UTC m=+1427.017857803" watchObservedRunningTime="2025-12-10 14:55:22.826253401 +0000 UTC m=+1427.021027943" Dec 10 14:55:22 crc kubenswrapper[4727]: I1210 14:55:22.863043 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=18.646301622 podStartE2EDuration="1m10.863019939s" podCreationTimestamp="2025-12-10 14:54:12 +0000 UTC" firstStartedPulling="2025-12-10 14:54:29.051354371 +0000 UTC m=+1373.246128913" lastFinishedPulling="2025-12-10 14:55:21.268072688 +0000 UTC m=+1425.462847230" observedRunningTime="2025-12-10 14:55:22.846066091 +0000 UTC m=+1427.040840643" 
watchObservedRunningTime="2025-12-10 14:55:22.863019939 +0000 UTC m=+1427.057794481" Dec 10 14:55:23 crc kubenswrapper[4727]: I1210 14:55:23.071721 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a77b-account-create-update-zhrh8"] Dec 10 14:55:23 crc kubenswrapper[4727]: I1210 14:55:23.082616 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fds6f"] Dec 10 14:55:23 crc kubenswrapper[4727]: W1210 14:55:23.097337 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7df0cafa_4dc3_445a_8107_1098c218c787.slice/crio-151a1dca145384f63750aeed6ecce37f4e9ee1ba1c1790c35360a07d339884d4 WatchSource:0}: Error finding container 151a1dca145384f63750aeed6ecce37f4e9ee1ba1c1790c35360a07d339884d4: Status 404 returned error can't find the container with id 151a1dca145384f63750aeed6ecce37f4e9ee1ba1c1790c35360a07d339884d4 Dec 10 14:55:23 crc kubenswrapper[4727]: W1210 14:55:23.099395 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d9ff9d9_b83b_4eaa_a0f1_629cb04471d6.slice/crio-439b73c4a95a4290a9703e69026e808eff3535fd96304a45e49ff4ce7e13cfc8 WatchSource:0}: Error finding container 439b73c4a95a4290a9703e69026e808eff3535fd96304a45e49ff4ce7e13cfc8: Status 404 returned error can't find the container with id 439b73c4a95a4290a9703e69026e808eff3535fd96304a45e49ff4ce7e13cfc8 Dec 10 14:55:23 crc kubenswrapper[4727]: I1210 14:55:23.295675 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 10 14:55:23 crc kubenswrapper[4727]: I1210 14:55:23.807321 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a77b-account-create-update-zhrh8" event={"ID":"9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6","Type":"ContainerStarted","Data":"439b73c4a95a4290a9703e69026e808eff3535fd96304a45e49ff4ce7e13cfc8"} Dec 10 14:55:23 crc kubenswrapper[4727]: I1210 14:55:23.809023 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fds6f" event={"ID":"7df0cafa-4dc3-445a-8107-1098c218c787","Type":"ContainerStarted","Data":"151a1dca145384f63750aeed6ecce37f4e9ee1ba1c1790c35360a07d339884d4"} Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.296212 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.350232 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.735747 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-n2gx8"] Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.737204 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-n2gx8" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.753759 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-224d-account-create-update-897d4"] Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.755439 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-224d-account-create-update-897d4" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.763340 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.768930 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-n2gx8"] Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.784254 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-224d-account-create-update-897d4"] Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.833239 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a77b-account-create-update-zhrh8" event={"ID":"9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6","Type":"ContainerStarted","Data":"3769165c241d7118262dd1e3a26c27407f49bf97a5becf0a86cb525a1bbb4249"} Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.838725 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9724e12-6c3d-48c9-b783-13520354dda1-operator-scripts\") pod \"cinder-224d-account-create-update-897d4\" (UID: \"e9724e12-6c3d-48c9-b783-13520354dda1\") " pod="openstack/cinder-224d-account-create-update-897d4" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.838817 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/402116b0-924d-4dec-aece-9da581a05b83-operator-scripts\") pod \"cinder-db-create-n2gx8\" (UID: \"402116b0-924d-4dec-aece-9da581a05b83\") " pod="openstack/cinder-db-create-n2gx8" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.838850 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fqjn\" (UniqueName: \"kubernetes.io/projected/402116b0-924d-4dec-aece-9da581a05b83-kube-api-access-8fqjn\") pod \"cinder-db-create-n2gx8\" (UID: \"402116b0-924d-4dec-aece-9da581a05b83\") " pod="openstack/cinder-db-create-n2gx8" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.838924 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6cvh\" (UniqueName: \"kubernetes.io/projected/e9724e12-6c3d-48c9-b783-13520354dda1-kube-api-access-s6cvh\") pod \"cinder-224d-account-create-update-897d4\" (UID: \"e9724e12-6c3d-48c9-b783-13520354dda1\") " pod="openstack/cinder-224d-account-create-update-897d4" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.839606 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fds6f" event={"ID":"7df0cafa-4dc3-445a-8107-1098c218c787","Type":"ContainerStarted","Data":"82743c870535a32c398de2204838e3deb23b6440f8887bfda6cd2a86c53c07f2"} Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.859061 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-ztlsl"] Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.860377 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-ztlsl" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.866380 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-a77b-account-create-update-zhrh8" podStartSLOduration=2.866360647 podStartE2EDuration="2.866360647s" podCreationTimestamp="2025-12-10 14:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:24.859880073 +0000 UTC m=+1429.054654615" watchObservedRunningTime="2025-12-10 14:55:24.866360647 +0000 UTC m=+1429.061135189" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.903141 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ztlsl"] Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.905165 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-fds6f" podStartSLOduration=2.905143766 podStartE2EDuration="2.905143766s" podCreationTimestamp="2025-12-10 14:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:24.87956341 +0000 UTC m=+1429.074337972" watchObservedRunningTime="2025-12-10 14:55:24.905143766 +0000 UTC m=+1429.099918298" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.938888 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-cv5tz"] Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.941881 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/402116b0-924d-4dec-aece-9da581a05b83-operator-scripts\") pod \"cinder-db-create-n2gx8\" (UID: \"402116b0-924d-4dec-aece-9da581a05b83\") " pod="openstack/cinder-db-create-n2gx8" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.942519 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-cv5tz" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.940781 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/402116b0-924d-4dec-aece-9da581a05b83-operator-scripts\") pod \"cinder-db-create-n2gx8\" (UID: \"402116b0-924d-4dec-aece-9da581a05b83\") " pod="openstack/cinder-db-create-n2gx8" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.943074 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fqjn\" (UniqueName: \"kubernetes.io/projected/402116b0-924d-4dec-aece-9da581a05b83-kube-api-access-8fqjn\") pod \"cinder-db-create-n2gx8\" (UID: \"402116b0-924d-4dec-aece-9da581a05b83\") " pod="openstack/cinder-db-create-n2gx8" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.943182 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6cvh\" (UniqueName: \"kubernetes.io/projected/e9724e12-6c3d-48c9-b783-13520354dda1-kube-api-access-s6cvh\") pod \"cinder-224d-account-create-update-897d4\" (UID: \"e9724e12-6c3d-48c9-b783-13520354dda1\") " pod="openstack/cinder-224d-account-create-update-897d4" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.943424 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b7b4d6-d45b-40ce-80db-772552dfa8e0-operator-scripts\") pod \"barbican-db-create-ztlsl\" (UID: \"c8b7b4d6-d45b-40ce-80db-772552dfa8e0\") " pod="openstack/barbican-db-create-ztlsl" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.943508 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p4tq\" (UniqueName: \"kubernetes.io/projected/c8b7b4d6-d45b-40ce-80db-772552dfa8e0-kube-api-access-2p4tq\") pod \"barbican-db-create-ztlsl\" (UID: \"c8b7b4d6-d45b-40ce-80db-772552dfa8e0\") " pod="openstack/barbican-db-create-ztlsl" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.943541 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9724e12-6c3d-48c9-b783-13520354dda1-operator-scripts\") pod \"cinder-224d-account-create-update-897d4\" (UID: \"e9724e12-6c3d-48c9-b783-13520354dda1\") " pod="openstack/cinder-224d-account-create-update-897d4" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.946102 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9724e12-6c3d-48c9-b783-13520354dda1-operator-scripts\") pod \"cinder-224d-account-create-update-897d4\" (UID: \"e9724e12-6c3d-48c9-b783-13520354dda1\") " pod="openstack/cinder-224d-account-create-update-897d4" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.954894 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-cv5tz"] Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.995084 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6cvh\" (UniqueName: \"kubernetes.io/projected/e9724e12-6c3d-48c9-b783-13520354dda1-kube-api-access-s6cvh\") pod \"cinder-224d-account-create-update-897d4\" (UID: \"e9724e12-6c3d-48c9-b783-13520354dda1\") " pod="openstack/cinder-224d-account-create-update-897d4" Dec 10 14:55:24 crc kubenswrapper[4727]: I1210 14:55:24.995778 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fqjn\" (UniqueName: \"kubernetes.io/projected/402116b0-924d-4dec-aece-9da581a05b83-kube-api-access-8fqjn\") pod \"cinder-db-create-n2gx8\" (UID: \"402116b0-924d-4dec-aece-9da581a05b83\") " pod="openstack/cinder-db-create-n2gx8" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.045053 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5fjj\" (UniqueName: \"kubernetes.io/projected/8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c-kube-api-access-h5fjj\") pod \"cloudkitty-db-create-cv5tz\" (UID: \"8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c\") " pod="openstack/cloudkitty-db-create-cv5tz" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.045139 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b7b4d6-d45b-40ce-80db-772552dfa8e0-operator-scripts\") pod \"barbican-db-create-ztlsl\" (UID: \"c8b7b4d6-d45b-40ce-80db-772552dfa8e0\") " pod="openstack/barbican-db-create-ztlsl" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.045186 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p4tq\" (UniqueName: \"kubernetes.io/projected/c8b7b4d6-d45b-40ce-80db-772552dfa8e0-kube-api-access-2p4tq\") pod \"barbican-db-create-ztlsl\" (UID: \"c8b7b4d6-d45b-40ce-80db-772552dfa8e0\") " pod="openstack/barbican-db-create-ztlsl" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.045265 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c-operator-scripts\") pod \"cloudkitty-db-create-cv5tz\" (UID: \"8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c\") " pod="openstack/cloudkitty-db-create-cv5tz" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.046255 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b7b4d6-d45b-40ce-80db-772552dfa8e0-operator-scripts\") pod \"barbican-db-create-ztlsl\" (UID: \"c8b7b4d6-d45b-40ce-80db-772552dfa8e0\") " pod="openstack/barbican-db-create-ztlsl" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.054276 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-0fdf-account-create-update-2tlkr"] Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.056172 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-0fdf-account-create-update-2tlkr" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.059473 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.065083 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-n2gx8" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.065835 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-0fdf-account-create-update-2tlkr"] Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.091715 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-224d-account-create-update-897d4" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.092576 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p4tq\" (UniqueName: \"kubernetes.io/projected/c8b7b4d6-d45b-40ce-80db-772552dfa8e0-kube-api-access-2p4tq\") pod \"barbican-db-create-ztlsl\" (UID: \"c8b7b4d6-d45b-40ce-80db-772552dfa8e0\") " pod="openstack/barbican-db-create-ztlsl" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.147159 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdhg7\" (UniqueName: \"kubernetes.io/projected/945edb14-70e4-40c5-a208-f14443517e42-kube-api-access-jdhg7\") pod \"cloudkitty-0fdf-account-create-update-2tlkr\" (UID: \"945edb14-70e4-40c5-a208-f14443517e42\") " pod="openstack/cloudkitty-0fdf-account-create-update-2tlkr" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.147220 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/945edb14-70e4-40c5-a208-f14443517e42-operator-scripts\") pod \"cloudkitty-0fdf-account-create-update-2tlkr\" (UID: \"945edb14-70e4-40c5-a208-f14443517e42\") " pod="openstack/cloudkitty-0fdf-account-create-update-2tlkr" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.147259 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5fjj\" (UniqueName: \"kubernetes.io/projected/8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c-kube-api-access-h5fjj\") pod \"cloudkitty-db-create-cv5tz\" (UID: \"8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c\") " pod="openstack/cloudkitty-db-create-cv5tz" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.147350 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c-operator-scripts\") pod \"cloudkitty-db-create-cv5tz\" (UID: \"8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c\") " pod="openstack/cloudkitty-db-create-cv5tz" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.148037 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c-operator-scripts\") pod \"cloudkitty-db-create-cv5tz\" (UID: \"8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c\") " pod="openstack/cloudkitty-db-create-cv5tz" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.159221 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-03a2-account-create-update-p7wsc"] Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.160516 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-03a2-account-create-update-p7wsc" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.165923 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.195379 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-ndq58"] Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.196175 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-ztlsl" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.197278 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5fjj\" (UniqueName: \"kubernetes.io/projected/8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c-kube-api-access-h5fjj\") pod \"cloudkitty-db-create-cv5tz\" (UID: \"8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c\") " pod="openstack/cloudkitty-db-create-cv5tz" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.203697 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ndq58" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.207114 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-03a2-account-create-update-p7wsc"] Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.222965 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ndq58"] Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.249704 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm5pf\" (UniqueName: \"kubernetes.io/projected/577fad75-56b7-4ad0-89c9-b44f0c771ef7-kube-api-access-wm5pf\") pod \"barbican-03a2-account-create-update-p7wsc\" (UID: \"577fad75-56b7-4ad0-89c9-b44f0c771ef7\") " pod="openstack/barbican-03a2-account-create-update-p7wsc" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.250092 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdhg7\" (UniqueName: \"kubernetes.io/projected/945edb14-70e4-40c5-a208-f14443517e42-kube-api-access-jdhg7\") pod \"cloudkitty-0fdf-account-create-update-2tlkr\" (UID: \"945edb14-70e4-40c5-a208-f14443517e42\") " pod="openstack/cloudkitty-0fdf-account-create-update-2tlkr" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.250171 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/945edb14-70e4-40c5-a208-f14443517e42-operator-scripts\") pod \"cloudkitty-0fdf-account-create-update-2tlkr\" (UID: \"945edb14-70e4-40c5-a208-f14443517e42\") " pod="openstack/cloudkitty-0fdf-account-create-update-2tlkr" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.250882 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/945edb14-70e4-40c5-a208-f14443517e42-operator-scripts\") pod \"cloudkitty-0fdf-account-create-update-2tlkr\" (UID: \"945edb14-70e4-40c5-a208-f14443517e42\") " pod="openstack/cloudkitty-0fdf-account-create-update-2tlkr" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.251191 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/577fad75-56b7-4ad0-89c9-b44f0c771ef7-operator-scripts\") pod \"barbican-03a2-account-create-update-p7wsc\" (UID: \"577fad75-56b7-4ad0-89c9-b44f0c771ef7\") " pod="openstack/barbican-03a2-account-create-update-p7wsc" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.261501 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-cv5tz" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.320720 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdhg7\" (UniqueName: \"kubernetes.io/projected/945edb14-70e4-40c5-a208-f14443517e42-kube-api-access-jdhg7\") pod \"cloudkitty-0fdf-account-create-update-2tlkr\" (UID: \"945edb14-70e4-40c5-a208-f14443517e42\") " pod="openstack/cloudkitty-0fdf-account-create-update-2tlkr" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.352591 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txgxt\" (UniqueName: \"kubernetes.io/projected/b8712c0b-547f-4dda-83f9-bc4d5b9063e8-kube-api-access-txgxt\") pod \"neutron-db-create-ndq58\" (UID: \"b8712c0b-547f-4dda-83f9-bc4d5b9063e8\") " pod="openstack/neutron-db-create-ndq58" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.352649 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8712c0b-547f-4dda-83f9-bc4d5b9063e8-operator-scripts\") pod \"neutron-db-create-ndq58\" (UID: \"b8712c0b-547f-4dda-83f9-bc4d5b9063e8\") " pod="openstack/neutron-db-create-ndq58" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.352704 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/577fad75-56b7-4ad0-89c9-b44f0c771ef7-operator-scripts\") pod \"barbican-03a2-account-create-update-p7wsc\" (UID: \"577fad75-56b7-4ad0-89c9-b44f0c771ef7\") " pod="openstack/barbican-03a2-account-create-update-p7wsc" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.352732 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm5pf\" (UniqueName: \"kubernetes.io/projected/577fad75-56b7-4ad0-89c9-b44f0c771ef7-kube-api-access-wm5pf\") pod \"barbican-03a2-account-create-update-p7wsc\" (UID: \"577fad75-56b7-4ad0-89c9-b44f0c771ef7\") " pod="openstack/barbican-03a2-account-create-update-p7wsc" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.354403 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/577fad75-56b7-4ad0-89c9-b44f0c771ef7-operator-scripts\") pod \"barbican-03a2-account-create-update-p7wsc\" (UID: \"577fad75-56b7-4ad0-89c9-b44f0c771ef7\") " pod="openstack/barbican-03a2-account-create-update-p7wsc" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.402154 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm5pf\" (UniqueName: \"kubernetes.io/projected/577fad75-56b7-4ad0-89c9-b44f0c771ef7-kube-api-access-wm5pf\") pod \"barbican-03a2-account-create-update-p7wsc\" (UID: \"577fad75-56b7-4ad0-89c9-b44f0c771ef7\") " pod="openstack/barbican-03a2-account-create-update-p7wsc" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.407655 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-05f6-account-create-update-48sht"] Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.409109 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-05f6-account-create-update-48sht" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.412473 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.422657 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-05f6-account-create-update-48sht"] Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.423759 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-0fdf-account-create-update-2tlkr" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.455177 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txgxt\" (UniqueName: \"kubernetes.io/projected/b8712c0b-547f-4dda-83f9-bc4d5b9063e8-kube-api-access-txgxt\") pod \"neutron-db-create-ndq58\" (UID: \"b8712c0b-547f-4dda-83f9-bc4d5b9063e8\") " pod="openstack/neutron-db-create-ndq58" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.455234 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8712c0b-547f-4dda-83f9-bc4d5b9063e8-operator-scripts\") pod \"neutron-db-create-ndq58\" (UID: \"b8712c0b-547f-4dda-83f9-bc4d5b9063e8\") " pod="openstack/neutron-db-create-ndq58" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.456014 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8712c0b-547f-4dda-83f9-bc4d5b9063e8-operator-scripts\") pod \"neutron-db-create-ndq58\" (UID: \"b8712c0b-547f-4dda-83f9-bc4d5b9063e8\") " pod="openstack/neutron-db-create-ndq58" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.475698 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txgxt\" (UniqueName: \"kubernetes.io/projected/b8712c0b-547f-4dda-83f9-bc4d5b9063e8-kube-api-access-txgxt\") pod \"neutron-db-create-ndq58\" (UID: \"b8712c0b-547f-4dda-83f9-bc4d5b9063e8\") " pod="openstack/neutron-db-create-ndq58" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.535142 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-03a2-account-create-update-p7wsc" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.541662 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ndq58" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.556438 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51c0327a-5640-4478-8641-5e495745e5cd-operator-scripts\") pod \"neutron-05f6-account-create-update-48sht\" (UID: \"51c0327a-5640-4478-8641-5e495745e5cd\") " pod="openstack/neutron-05f6-account-create-update-48sht" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.556549 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvt8\" (UniqueName: \"kubernetes.io/projected/51c0327a-5640-4478-8641-5e495745e5cd-kube-api-access-jnvt8\") pod \"neutron-05f6-account-create-update-48sht\" (UID: \"51c0327a-5640-4478-8641-5e495745e5cd\") " pod="openstack/neutron-05f6-account-create-update-48sht" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.658348 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnvt8\" (UniqueName: \"kubernetes.io/projected/51c0327a-5640-4478-8641-5e495745e5cd-kube-api-access-jnvt8\") pod \"neutron-05f6-account-create-update-48sht\" (UID: \"51c0327a-5640-4478-8641-5e495745e5cd\") " pod="openstack/neutron-05f6-account-create-update-48sht" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.658585 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51c0327a-5640-4478-8641-5e495745e5cd-operator-scripts\") pod \"neutron-05f6-account-create-update-48sht\" (UID: \"51c0327a-5640-4478-8641-5e495745e5cd\") " pod="openstack/neutron-05f6-account-create-update-48sht" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.659808 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51c0327a-5640-4478-8641-5e495745e5cd-operator-scripts\") pod \"neutron-05f6-account-create-update-48sht\" (UID: \"51c0327a-5640-4478-8641-5e495745e5cd\") " pod="openstack/neutron-05f6-account-create-update-48sht" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.675857 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnvt8\" (UniqueName: \"kubernetes.io/projected/51c0327a-5640-4478-8641-5e495745e5cd-kube-api-access-jnvt8\") pod \"neutron-05f6-account-create-update-48sht\" (UID: \"51c0327a-5640-4478-8641-5e495745e5cd\") " pod="openstack/neutron-05f6-account-create-update-48sht" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.732246 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-05f6-account-create-update-48sht" Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.852435 4727 generic.go:334] "Generic (PLEG): container finished" podID="9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6" containerID="3769165c241d7118262dd1e3a26c27407f49bf97a5becf0a86cb525a1bbb4249" exitCode=0 Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.852995 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a77b-account-create-update-zhrh8" event={"ID":"9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6","Type":"ContainerDied","Data":"3769165c241d7118262dd1e3a26c27407f49bf97a5becf0a86cb525a1bbb4249"} Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.858288 4727 generic.go:334] "Generic (PLEG): container finished" podID="7df0cafa-4dc3-445a-8107-1098c218c787" containerID="82743c870535a32c398de2204838e3deb23b6440f8887bfda6cd2a86c53c07f2" exitCode=0 Dec 10 14:55:25 crc kubenswrapper[4727]: I1210 14:55:25.858342 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fds6f" event={"ID":"7df0cafa-4dc3-445a-8107-1098c218c787","Type":"ContainerDied","Data":"82743c870535a32c398de2204838e3deb23b6440f8887bfda6cd2a86c53c07f2"} Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.369982 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fpmqv"] Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.372166 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fpmqv" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.410025 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fpmqv"] Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.414752 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.479167 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5p92\" (UniqueName: \"kubernetes.io/projected/506cfe4b-7b71-418d-bba3-0e534380eea8-kube-api-access-t5p92\") pod \"keystone-db-create-fpmqv\" (UID: \"506cfe4b-7b71-418d-bba3-0e534380eea8\") " pod="openstack/keystone-db-create-fpmqv" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.479802 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/506cfe4b-7b71-418d-bba3-0e534380eea8-operator-scripts\") pod \"keystone-db-create-fpmqv\" (UID: \"506cfe4b-7b71-418d-bba3-0e534380eea8\") " pod="openstack/keystone-db-create-fpmqv" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.490108 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-ebc3-account-create-update-cr7fz"] Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.492464 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ebc3-account-create-update-cr7fz" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.504562 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.518168 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ebc3-account-create-update-cr7fz"] Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.586470 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56bec11f-546b-44e3-9fbe-11468e08ebca-operator-scripts\") pod \"keystone-ebc3-account-create-update-cr7fz\" (UID: \"56bec11f-546b-44e3-9fbe-11468e08ebca\") " pod="openstack/keystone-ebc3-account-create-update-cr7fz" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.586574 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5p92\" (UniqueName: \"kubernetes.io/projected/506cfe4b-7b71-418d-bba3-0e534380eea8-kube-api-access-t5p92\") pod \"keystone-db-create-fpmqv\" (UID: \"506cfe4b-7b71-418d-bba3-0e534380eea8\") " pod="openstack/keystone-db-create-fpmqv" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.586693 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb446\" (UniqueName: \"kubernetes.io/projected/56bec11f-546b-44e3-9fbe-11468e08ebca-kube-api-access-cb446\") pod \"keystone-ebc3-account-create-update-cr7fz\" (UID: \"56bec11f-546b-44e3-9fbe-11468e08ebca\") " pod="openstack/keystone-ebc3-account-create-update-cr7fz" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.586765 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/506cfe4b-7b71-418d-bba3-0e534380eea8-operator-scripts\") pod \"keystone-db-create-fpmqv\" (UID: \"506cfe4b-7b71-418d-bba3-0e534380eea8\") " pod="openstack/keystone-db-create-fpmqv" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.587811 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/506cfe4b-7b71-418d-bba3-0e534380eea8-operator-scripts\") pod \"keystone-db-create-fpmqv\" (UID: \"506cfe4b-7b71-418d-bba3-0e534380eea8\") " pod="openstack/keystone-db-create-fpmqv" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.667769 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5p92\" (UniqueName: \"kubernetes.io/projected/506cfe4b-7b71-418d-bba3-0e534380eea8-kube-api-access-t5p92\") pod \"keystone-db-create-fpmqv\" (UID: \"506cfe4b-7b71-418d-bba3-0e534380eea8\") " pod="openstack/keystone-db-create-fpmqv" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.686149 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-b8j8z"] Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.687706 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-b8j8z" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.691664 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56bec11f-546b-44e3-9fbe-11468e08ebca-operator-scripts\") pod \"keystone-ebc3-account-create-update-cr7fz\" (UID: \"56bec11f-546b-44e3-9fbe-11468e08ebca\") " pod="openstack/keystone-ebc3-account-create-update-cr7fz" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.692087 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb446\" (UniqueName: \"kubernetes.io/projected/56bec11f-546b-44e3-9fbe-11468e08ebca-kube-api-access-cb446\") pod \"keystone-ebc3-account-create-update-cr7fz\" (UID: \"56bec11f-546b-44e3-9fbe-11468e08ebca\") " pod="openstack/keystone-ebc3-account-create-update-cr7fz" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.693016 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b8j8z"] Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.694564 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56bec11f-546b-44e3-9fbe-11468e08ebca-operator-scripts\") pod \"keystone-ebc3-account-create-update-cr7fz\" (UID: \"56bec11f-546b-44e3-9fbe-11468e08ebca\") " pod="openstack/keystone-ebc3-account-create-update-cr7fz" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.717542 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fpmqv" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.718586 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb446\" (UniqueName: \"kubernetes.io/projected/56bec11f-546b-44e3-9fbe-11468e08ebca-kube-api-access-cb446\") pod \"keystone-ebc3-account-create-update-cr7fz\" (UID: \"56bec11f-546b-44e3-9fbe-11468e08ebca\") " pod="openstack/keystone-ebc3-account-create-update-cr7fz" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.794355 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6c737fc-a5d8-4dde-9040-d6ff30a37557-operator-scripts\") pod \"placement-db-create-b8j8z\" (UID: \"b6c737fc-a5d8-4dde-9040-d6ff30a37557\") " pod="openstack/placement-db-create-b8j8z" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.794748 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwxjl\" (UniqueName: \"kubernetes.io/projected/b6c737fc-a5d8-4dde-9040-d6ff30a37557-kube-api-access-dwxjl\") pod \"placement-db-create-b8j8z\" (UID: \"b6c737fc-a5d8-4dde-9040-d6ff30a37557\") " pod="openstack/placement-db-create-b8j8z" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.829550 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ebc3-account-create-update-cr7fz" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.898190 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6c737fc-a5d8-4dde-9040-d6ff30a37557-operator-scripts\") pod \"placement-db-create-b8j8z\" (UID: \"b6c737fc-a5d8-4dde-9040-d6ff30a37557\") " pod="openstack/placement-db-create-b8j8z" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.899192 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6c737fc-a5d8-4dde-9040-d6ff30a37557-operator-scripts\") pod \"placement-db-create-b8j8z\" (UID: \"b6c737fc-a5d8-4dde-9040-d6ff30a37557\") " pod="openstack/placement-db-create-b8j8z" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.899785 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwxjl\" (UniqueName: \"kubernetes.io/projected/b6c737fc-a5d8-4dde-9040-d6ff30a37557-kube-api-access-dwxjl\") pod \"placement-db-create-b8j8z\" (UID: \"b6c737fc-a5d8-4dde-9040-d6ff30a37557\") " pod="openstack/placement-db-create-b8j8z" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.911101 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6bf5-account-create-update-8pj4j"] Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.912725 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bf5-account-create-update-8pj4j" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.917479 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.923899 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwxjl\" (UniqueName: \"kubernetes.io/projected/b6c737fc-a5d8-4dde-9040-d6ff30a37557-kube-api-access-dwxjl\") pod \"placement-db-create-b8j8z\" (UID: \"b6c737fc-a5d8-4dde-9040-d6ff30a37557\") " pod="openstack/placement-db-create-b8j8z" Dec 10 14:55:26 crc kubenswrapper[4727]: I1210 14:55:26.943485 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bf5-account-create-update-8pj4j"] Dec 10 14:55:27 crc kubenswrapper[4727]: I1210 14:55:27.086458 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-b8j8z" Dec 10 14:55:27 crc kubenswrapper[4727]: I1210 14:55:27.155606 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vstmj\" (UniqueName: \"kubernetes.io/projected/f4a907b9-0bc8-44e2-b3cb-3a1e867975ec-kube-api-access-vstmj\") pod \"placement-6bf5-account-create-update-8pj4j\" (UID: \"f4a907b9-0bc8-44e2-b3cb-3a1e867975ec\") " pod="openstack/placement-6bf5-account-create-update-8pj4j" Dec 10 14:55:27 crc kubenswrapper[4727]: I1210 14:55:27.155804 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a907b9-0bc8-44e2-b3cb-3a1e867975ec-operator-scripts\") pod \"placement-6bf5-account-create-update-8pj4j\" (UID: \"f4a907b9-0bc8-44e2-b3cb-3a1e867975ec\") " pod="openstack/placement-6bf5-account-create-update-8pj4j" Dec 10 14:55:27 crc kubenswrapper[4727]: I1210 14:55:27.261481 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a907b9-0bc8-44e2-b3cb-3a1e867975ec-operator-scripts\") pod \"placement-6bf5-account-create-update-8pj4j\" (UID: \"f4a907b9-0bc8-44e2-b3cb-3a1e867975ec\") " pod="openstack/placement-6bf5-account-create-update-8pj4j" Dec 10 14:55:27 crc kubenswrapper[4727]: I1210 14:55:27.261635 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vstmj\" (UniqueName: \"kubernetes.io/projected/f4a907b9-0bc8-44e2-b3cb-3a1e867975ec-kube-api-access-vstmj\") pod \"placement-6bf5-account-create-update-8pj4j\" (UID: \"f4a907b9-0bc8-44e2-b3cb-3a1e867975ec\") " pod="openstack/placement-6bf5-account-create-update-8pj4j" Dec 10 14:55:27 crc kubenswrapper[4727]: I1210 14:55:27.262298 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a907b9-0bc8-44e2-b3cb-3a1e867975ec-operator-scripts\") pod \"placement-6bf5-account-create-update-8pj4j\" (UID: \"f4a907b9-0bc8-44e2-b3cb-3a1e867975ec\") " pod="openstack/placement-6bf5-account-create-update-8pj4j" Dec 10 14:55:27 crc kubenswrapper[4727]: I1210 14:55:27.293710 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vstmj\" (UniqueName: \"kubernetes.io/projected/f4a907b9-0bc8-44e2-b3cb-3a1e867975ec-kube-api-access-vstmj\") pod \"placement-6bf5-account-create-update-8pj4j\" (UID: \"f4a907b9-0bc8-44e2-b3cb-3a1e867975ec\") " pod="openstack/placement-6bf5-account-create-update-8pj4j" Dec 10 14:55:27 crc kubenswrapper[4727]: I1210 14:55:27.362245 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6bf5-account-create-update-8pj4j" Dec 10 14:55:28 crc kubenswrapper[4727]: I1210 14:55:28.759349 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 10 14:55:28 crc kubenswrapper[4727]: I1210 14:55:28.950108 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mplp7"] Dec 10 14:55:28 crc kubenswrapper[4727]: I1210 14:55:28.950434 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-mplp7" podUID="6139edab-f197-4da9-88c6-c8625947ab08" containerName="dnsmasq-dns" containerID="cri-o://52815581f412a639d29f38fc4d2de1a40eac910e17a4cfb2379e66a2a5de8674" gracePeriod=10 Dec 10 14:55:28 crc kubenswrapper[4727]: I1210 14:55:28.952354 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-mplp7" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.002979 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-cvb6b"] Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.005330 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.055984 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cvb6b"] Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.209426 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-config\") pod \"dnsmasq-dns-698758b865-cvb6b\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.209539 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-cvb6b\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.209607 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcc9v\" (UniqueName: \"kubernetes.io/projected/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-kube-api-access-jcc9v\") pod \"dnsmasq-dns-698758b865-cvb6b\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.209880 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-dns-svc\") pod \"dnsmasq-dns-698758b865-cvb6b\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.209979 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-cvb6b\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 
14:55:29.313154 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-dns-svc\") pod \"dnsmasq-dns-698758b865-cvb6b\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.313450 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-cvb6b\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.313578 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-cvb6b\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.313608 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-config\") pod \"dnsmasq-dns-698758b865-cvb6b\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.313642 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcc9v\" (UniqueName: \"kubernetes.io/projected/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-kube-api-access-jcc9v\") pod \"dnsmasq-dns-698758b865-cvb6b\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.314664 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-cvb6b\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.314847 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-cvb6b\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.314959 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-config\") pod \"dnsmasq-dns-698758b865-cvb6b\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.315072 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-dns-svc\") pod \"dnsmasq-dns-698758b865-cvb6b\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.345294 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fds6f" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.348158 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcc9v\" (UniqueName: \"kubernetes.io/projected/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-kube-api-access-jcc9v\") pod \"dnsmasq-dns-698758b865-cvb6b\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.366385 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a77b-account-create-update-zhrh8" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.385157 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.393189 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.516603 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n6m4\" (UniqueName: \"kubernetes.io/projected/7df0cafa-4dc3-445a-8107-1098c218c787-kube-api-access-4n6m4\") pod \"7df0cafa-4dc3-445a-8107-1098c218c787\" (UID: \"7df0cafa-4dc3-445a-8107-1098c218c787\") " Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.516691 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7df0cafa-4dc3-445a-8107-1098c218c787-operator-scripts\") pod \"7df0cafa-4dc3-445a-8107-1098c218c787\" (UID: \"7df0cafa-4dc3-445a-8107-1098c218c787\") " Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.516828 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6-operator-scripts\") pod \"9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6\" (UID: \"9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6\") " Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.517068 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt82m\" (UniqueName: \"kubernetes.io/projected/9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6-kube-api-access-zt82m\") pod \"9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6\" (UID: \"9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6\") " Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.518330 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df0cafa-4dc3-445a-8107-1098c218c787-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7df0cafa-4dc3-445a-8107-1098c218c787" (UID: "7df0cafa-4dc3-445a-8107-1098c218c787"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.520402 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6" (UID: "9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.524050 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6-kube-api-access-zt82m" (OuterVolumeSpecName: "kube-api-access-zt82m") pod "9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6" (UID: "9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6"). InnerVolumeSpecName "kube-api-access-zt82m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.528809 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df0cafa-4dc3-445a-8107-1098c218c787-kube-api-access-4n6m4" (OuterVolumeSpecName: "kube-api-access-4n6m4") pod "7df0cafa-4dc3-445a-8107-1098c218c787" (UID: "7df0cafa-4dc3-445a-8107-1098c218c787"). InnerVolumeSpecName "kube-api-access-4n6m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.621288 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n6m4\" (UniqueName: \"kubernetes.io/projected/7df0cafa-4dc3-445a-8107-1098c218c787-kube-api-access-4n6m4\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.621327 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7df0cafa-4dc3-445a-8107-1098c218c787-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.621335 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.621345 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt82m\" (UniqueName: \"kubernetes.io/projected/9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6-kube-api-access-zt82m\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.690192 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 10 14:55:29 crc kubenswrapper[4727]: E1210 14:55:29.693814 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df0cafa-4dc3-445a-8107-1098c218c787" containerName="mariadb-database-create" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.693839 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df0cafa-4dc3-445a-8107-1098c218c787" containerName="mariadb-database-create" Dec 10 14:55:29 crc kubenswrapper[4727]: E1210 14:55:29.693857 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6" containerName="mariadb-account-create-update" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.693864 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6" containerName="mariadb-account-create-update" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.694082 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6" containerName="mariadb-account-create-update" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.694106 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df0cafa-4dc3-445a-8107-1098c218c787" containerName="mariadb-database-create" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 
14:55:29.695562 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.699274 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.699291 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.699772 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-6hzvf" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.702783 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.718142 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.727224 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec242832-714a-4cb7-9bdc-c88b5336c201-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.727300 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec242832-714a-4cb7-9bdc-c88b5336c201-scripts\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.727357 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec242832-714a-4cb7-9bdc-c88b5336c201-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.727377 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec242832-714a-4cb7-9bdc-c88b5336c201-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.727444 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec242832-714a-4cb7-9bdc-c88b5336c201-config\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.727507 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ec242832-714a-4cb7-9bdc-c88b5336c201-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.727652 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k9pb\" (UniqueName: \"kubernetes.io/projected/ec242832-714a-4cb7-9bdc-c88b5336c201-kube-api-access-7k9pb\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " 
pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.830520 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec242832-714a-4cb7-9bdc-c88b5336c201-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.830609 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec242832-714a-4cb7-9bdc-c88b5336c201-scripts\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.830672 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec242832-714a-4cb7-9bdc-c88b5336c201-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.830708 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec242832-714a-4cb7-9bdc-c88b5336c201-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.830769 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec242832-714a-4cb7-9bdc-c88b5336c201-config\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.830826 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ec242832-714a-4cb7-9bdc-c88b5336c201-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.830956 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k9pb\" (UniqueName: \"kubernetes.io/projected/ec242832-714a-4cb7-9bdc-c88b5336c201-kube-api-access-7k9pb\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.833432 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ec242832-714a-4cb7-9bdc-c88b5336c201-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.833722 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec242832-714a-4cb7-9bdc-c88b5336c201-scripts\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.833767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec242832-714a-4cb7-9bdc-c88b5336c201-config\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 
14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.836281 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec242832-714a-4cb7-9bdc-c88b5336c201-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.838472 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec242832-714a-4cb7-9bdc-c88b5336c201-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.847508 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.860113 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec242832-714a-4cb7-9bdc-c88b5336c201-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.873892 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k9pb\" (UniqueName: \"kubernetes.io/projected/ec242832-714a-4cb7-9bdc-c88b5336c201-kube-api-access-7k9pb\") pod \"ovn-northd-0\" (UID: \"ec242832-714a-4cb7-9bdc-c88b5336c201\") " pod="openstack/ovn-northd-0" Dec 10 14:55:29 crc kubenswrapper[4727]: I1210 14:55:29.978279 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-mplp7" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.010335 4727 generic.go:334] "Generic (PLEG): container finished" podID="6139edab-f197-4da9-88c6-c8625947ab08" containerID="52815581f412a639d29f38fc4d2de1a40eac910e17a4cfb2379e66a2a5de8674" exitCode=0 Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.010452 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-mplp7" event={"ID":"6139edab-f197-4da9-88c6-c8625947ab08","Type":"ContainerDied","Data":"52815581f412a639d29f38fc4d2de1a40eac910e17a4cfb2379e66a2a5de8674"} Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.010494 4727 scope.go:117] "RemoveContainer" containerID="52815581f412a639d29f38fc4d2de1a40eac910e17a4cfb2379e66a2a5de8674" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.029948 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.046198 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-config\") pod \"6139edab-f197-4da9-88c6-c8625947ab08\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.046260 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-dns-svc\") pod \"6139edab-f197-4da9-88c6-c8625947ab08\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.046323 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-ovsdbserver-nb\") pod \"6139edab-f197-4da9-88c6-c8625947ab08\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.046602 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qzwg\" (UniqueName: \"kubernetes.io/projected/6139edab-f197-4da9-88c6-c8625947ab08-kube-api-access-9qzwg\") pod \"6139edab-f197-4da9-88c6-c8625947ab08\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.046628 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-ovsdbserver-sb\") pod \"6139edab-f197-4da9-88c6-c8625947ab08\" (UID: \"6139edab-f197-4da9-88c6-c8625947ab08\") " Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.052358 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a77b-account-create-update-zhrh8" event={"ID":"9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6","Type":"ContainerDied","Data":"439b73c4a95a4290a9703e69026e808eff3535fd96304a45e49ff4ce7e13cfc8"} Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.052485 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="439b73c4a95a4290a9703e69026e808eff3535fd96304a45e49ff4ce7e13cfc8" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.052590 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a77b-account-create-update-zhrh8" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.063217 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fds6f" event={"ID":"7df0cafa-4dc3-445a-8107-1098c218c787","Type":"ContainerDied","Data":"151a1dca145384f63750aeed6ecce37f4e9ee1ba1c1790c35360a07d339884d4"} Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.063262 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="151a1dca145384f63750aeed6ecce37f4e9ee1ba1c1790c35360a07d339884d4" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.063346 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fds6f" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.093650 4727 scope.go:117] "RemoveContainer" containerID="88b163c24328359cd22c95ce585d7ad8aaee4f5b5ee573727dcfe5a55a3e652c" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.103271 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6139edab-f197-4da9-88c6-c8625947ab08-kube-api-access-9qzwg" (OuterVolumeSpecName: "kube-api-access-9qzwg") pod "6139edab-f197-4da9-88c6-c8625947ab08" (UID: "6139edab-f197-4da9-88c6-c8625947ab08"). InnerVolumeSpecName "kube-api-access-9qzwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.114794 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.118171 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 10 14:55:30 crc kubenswrapper[4727]: E1210 14:55:30.118689 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6139edab-f197-4da9-88c6-c8625947ab08" containerName="dnsmasq-dns" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.118710 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6139edab-f197-4da9-88c6-c8625947ab08" containerName="dnsmasq-dns" Dec 10 14:55:30 crc kubenswrapper[4727]: E1210 14:55:30.118747 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6139edab-f197-4da9-88c6-c8625947ab08" containerName="init" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.118755 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6139edab-f197-4da9-88c6-c8625947ab08" containerName="init" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.119030 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6139edab-f197-4da9-88c6-c8625947ab08" containerName="dnsmasq-dns" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.148700 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.153257 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.153400 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.153501 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.153401 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.156023 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qzwg\" (UniqueName: \"kubernetes.io/projected/6139edab-f197-4da9-88c6-c8625947ab08-kube-api-access-9qzwg\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.157298 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-4jkbl" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.176709 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6139edab-f197-4da9-88c6-c8625947ab08" (UID: "6139edab-f197-4da9-88c6-c8625947ab08"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.185187 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6139edab-f197-4da9-88c6-c8625947ab08" (UID: "6139edab-f197-4da9-88c6-c8625947ab08"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.186033 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6139edab-f197-4da9-88c6-c8625947ab08" (UID: "6139edab-f197-4da9-88c6-c8625947ab08"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.260259 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0eb9c4ab-9a8b-4935-af94-5e072bf61202\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eb9c4ab-9a8b-4935-af94-5e072bf61202\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.260647 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.260791 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5fv7\" (UniqueName: \"kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-kube-api-access-f5fv7\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.261080 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2b2c88bb-9134-46aa-8595-4762fca3fb57-lock\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.261266 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2b2c88bb-9134-46aa-8595-4762fca3fb57-cache\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.261417 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.261430 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.261438 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.288490 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-config" (OuterVolumeSpecName: "config") pod "6139edab-f197-4da9-88c6-c8625947ab08" (UID: "6139edab-f197-4da9-88c6-c8625947ab08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.365507 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2b2c88bb-9134-46aa-8595-4762fca3fb57-cache\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.363414 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2b2c88bb-9134-46aa-8595-4762fca3fb57-cache\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.377991 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0eb9c4ab-9a8b-4935-af94-5e072bf61202\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eb9c4ab-9a8b-4935-af94-5e072bf61202\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.378027 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.378113 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5fv7\" (UniqueName: \"kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-kube-api-access-f5fv7\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.378236 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2b2c88bb-9134-46aa-8595-4762fca3fb57-lock\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.378408 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6139edab-f197-4da9-88c6-c8625947ab08-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.378807 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2b2c88bb-9134-46aa-8595-4762fca3fb57-lock\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: E1210 14:55:30.379416 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 14:55:30 crc kubenswrapper[4727]: E1210 14:55:30.379441 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 14:55:30 crc kubenswrapper[4727]: E1210 14:55:30.379551 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift podName:2b2c88bb-9134-46aa-8595-4762fca3fb57 nodeName:}" failed. 
No retries permitted until 2025-12-10 14:55:30.879522728 +0000 UTC m=+1435.074297270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift") pod "swift-storage-0" (UID: "2b2c88bb-9134-46aa-8595-4762fca3fb57") : configmap "swift-ring-files" not found Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.409404 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5fv7\" (UniqueName: \"kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-kube-api-access-f5fv7\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.411471 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.411600 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0eb9c4ab-9a8b-4935-af94-5e072bf61202\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eb9c4ab-9a8b-4935-af94-5e072bf61202\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/decf76d51de543235000c9da85a9ed0174717a36e88dfd397e170b22e5824a6e/globalmount\"" pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.493761 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0eb9c4ab-9a8b-4935-af94-5e072bf61202\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eb9c4ab-9a8b-4935-af94-5e072bf61202\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.619251 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fpmqv"] Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.711868 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-r4x4n"] Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.729298 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.733808 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.736560 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.745480 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-r4x4n"] Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.751476 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.783854 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-0fdf-account-create-update-2tlkr"] Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.802036 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-swiftconf\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.802096 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-dispersionconf\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.802125 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-combined-ca-bundle\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.802174 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56fa8f94-70a1-47ce-85c5-947e889ba79c-scripts\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.802311 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/56fa8f94-70a1-47ce-85c5-947e889ba79c-ring-data-devices\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.802342 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/56fa8f94-70a1-47ce-85c5-947e889ba79c-etc-swift\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.802373 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj65f\" (UniqueName: 
\"kubernetes.io/projected/56fa8f94-70a1-47ce-85c5-947e889ba79c-kube-api-access-lj65f\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.904488 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-dispersionconf\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.904870 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-combined-ca-bundle\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.904965 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56fa8f94-70a1-47ce-85c5-947e889ba79c-scripts\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.905079 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.905185 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/56fa8f94-70a1-47ce-85c5-947e889ba79c-ring-data-devices\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.905240 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/56fa8f94-70a1-47ce-85c5-947e889ba79c-etc-swift\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.905278 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj65f\" (UniqueName: \"kubernetes.io/projected/56fa8f94-70a1-47ce-85c5-947e889ba79c-kube-api-access-lj65f\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.905395 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-swiftconf\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: E1210 14:55:30.905694 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 14:55:30 crc kubenswrapper[4727]: E1210 14:55:30.905712 4727 projected.go:194] Error preparing data for projected volume 
etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 14:55:30 crc kubenswrapper[4727]: E1210 14:55:30.905766 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift podName:2b2c88bb-9134-46aa-8595-4762fca3fb57 nodeName:}" failed. No retries permitted until 2025-12-10 14:55:31.905748958 +0000 UTC m=+1436.100523500 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift") pod "swift-storage-0" (UID: "2b2c88bb-9134-46aa-8595-4762fca3fb57") : configmap "swift-ring-files" not found Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.906600 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/56fa8f94-70a1-47ce-85c5-947e889ba79c-ring-data-devices\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.906631 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56fa8f94-70a1-47ce-85c5-947e889ba79c-scripts\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.907784 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/56fa8f94-70a1-47ce-85c5-947e889ba79c-etc-swift\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.912607 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-combined-ca-bundle\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.913758 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-swiftconf\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.914176 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-dispersionconf\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:30 crc kubenswrapper[4727]: I1210 14:55:30.924234 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj65f\" (UniqueName: \"kubernetes.io/projected/56fa8f94-70a1-47ce-85c5-947e889ba79c-kube-api-access-lj65f\") pod \"swift-ring-rebalance-r4x4n\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.076165 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d","Type":"ContainerStarted","Data":"b3f7d4a4588cb831b89c9cd6668b9e17e95bf5ae78c88c6ebc35ec6e0f065e3e"} Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.078296 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-0fdf-account-create-update-2tlkr" event={"ID":"945edb14-70e4-40c5-a208-f14443517e42","Type":"ContainerStarted","Data":"04b5df8b47e87b1d98bcfb24516d1ce7fd97b33ff87765ed41c3b09a20566008"} Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.079657 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fpmqv" event={"ID":"506cfe4b-7b71-418d-bba3-0e534380eea8","Type":"ContainerStarted","Data":"a849d6a3b5215079a43576a940700cc3adc99cce1791d9cb7ea2edac1fb61047"} Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.084648 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-mplp7" event={"ID":"6139edab-f197-4da9-88c6-c8625947ab08","Type":"ContainerDied","Data":"1b77fd2a303ae5cce2b03c1e20768494dcf64f035828007ef7ddc858f0929d2e"} Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.084715 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-mplp7" Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.098511 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.116176 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mplp7"] Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.125540 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mplp7"] Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.376763 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="f429b94a-d632-41b7-85c3-584f6dfe4475" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.424754 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.439198 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.615393 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bf5-account-create-update-8pj4j"] Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.637937 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-05f6-account-create-update-48sht"] Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.652628 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-n2gx8"] Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.672298 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ebc3-account-create-update-cr7fz"] Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.696350 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cvb6b"] Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.716929 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-cv5tz"] Dec 10 14:55:31 crc kubenswrapper[4727]: 
I1210 14:55:31.744346 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b8j8z"] Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.768205 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-03a2-account-create-update-p7wsc"] Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.783976 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ndq58"] Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.797991 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ztlsl"] Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.809987 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-224d-account-create-update-897d4"] Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.816972 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 10 14:55:31 crc kubenswrapper[4727]: I1210 14:55:31.948377 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:31 crc kubenswrapper[4727]: E1210 14:55:31.948611 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 14:55:31 crc kubenswrapper[4727]: E1210 14:55:31.948633 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 14:55:31 crc kubenswrapper[4727]: E1210 14:55:31.948689 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift podName:2b2c88bb-9134-46aa-8595-4762fca3fb57 nodeName:}" failed. No retries permitted until 2025-12-10 14:55:33.948671709 +0000 UTC m=+1438.143446251 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift") pod "swift-storage-0" (UID: "2b2c88bb-9134-46aa-8595-4762fca3fb57") : configmap "swift-ring-files" not found Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.371485 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-zb5mz"] Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.373435 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zb5mz" Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.378218 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.378825 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zq7zd" Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.389825 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zb5mz"] Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.459792 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-combined-ca-bundle\") pod \"glance-db-sync-zb5mz\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " pod="openstack/glance-db-sync-zb5mz" Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.459842 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-config-data\") pod \"glance-db-sync-zb5mz\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " pod="openstack/glance-db-sync-zb5mz" Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.459926 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-db-sync-config-data\") pod \"glance-db-sync-zb5mz\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " pod="openstack/glance-db-sync-zb5mz" Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.459968 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47ztr\" (UniqueName: \"kubernetes.io/projected/20c3a0fe-e0f7-4f79-ae22-d143511424e9-kube-api-access-47ztr\") pod \"glance-db-sync-zb5mz\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " pod="openstack/glance-db-sync-zb5mz" Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.562211 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-combined-ca-bundle\") pod \"glance-db-sync-zb5mz\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " pod="openstack/glance-db-sync-zb5mz" Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.562505 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-config-data\") pod \"glance-db-sync-zb5mz\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " pod="openstack/glance-db-sync-zb5mz" Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.562546 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-db-sync-config-data\") pod \"glance-db-sync-zb5mz\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " pod="openstack/glance-db-sync-zb5mz" Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.562578 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47ztr\" (UniqueName: \"kubernetes.io/projected/20c3a0fe-e0f7-4f79-ae22-d143511424e9-kube-api-access-47ztr\") pod 
\"glance-db-sync-zb5mz\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " pod="openstack/glance-db-sync-zb5mz" Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.569927 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-db-sync-config-data\") pod \"glance-db-sync-zb5mz\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " pod="openstack/glance-db-sync-zb5mz" Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.573088 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-combined-ca-bundle\") pod \"glance-db-sync-zb5mz\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " pod="openstack/glance-db-sync-zb5mz" Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.575555 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-config-data\") pod \"glance-db-sync-zb5mz\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " pod="openstack/glance-db-sync-zb5mz" Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.597125 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47ztr\" (UniqueName: \"kubernetes.io/projected/20c3a0fe-e0f7-4f79-ae22-d143511424e9-kube-api-access-47ztr\") pod \"glance-db-sync-zb5mz\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " pod="openstack/glance-db-sync-zb5mz" Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.601045 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6139edab-f197-4da9-88c6-c8625947ab08" path="/var/lib/kubelet/pods/6139edab-f197-4da9-88c6-c8625947ab08/volumes" Dec 10 14:55:32 crc kubenswrapper[4727]: W1210 14:55:32.603522 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod577fad75_56b7_4ad0_89c9_b44f0c771ef7.slice/crio-99e35281bc3d00699ddc1fa566de24e5d9e0dc436110eda5c37bbd68f9eec080 WatchSource:0}: Error finding container 99e35281bc3d00699ddc1fa566de24e5d9e0dc436110eda5c37bbd68f9eec080: Status 404 returned error can't find the container with id 99e35281bc3d00699ddc1fa566de24e5d9e0dc436110eda5c37bbd68f9eec080 Dec 10 14:55:32 crc kubenswrapper[4727]: W1210 14:55:32.609145 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8712c0b_547f_4dda_83f9_bc4d5b9063e8.slice/crio-cef4f3bd8b5ac6194c36008f2094ef2441458992fed02e188f44685273733895 WatchSource:0}: Error finding container cef4f3bd8b5ac6194c36008f2094ef2441458992fed02e188f44685273733895: Status 404 returned error can't find the container with id cef4f3bd8b5ac6194c36008f2094ef2441458992fed02e188f44685273733895 Dec 10 14:55:32 crc kubenswrapper[4727]: W1210 14:55:32.612803 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8b7b4d6_d45b_40ce_80db_772552dfa8e0.slice/crio-436695ec59c3dced0ab07df3d040646a07877f0c72e582df1976f97a2a3ceefe WatchSource:0}: Error finding container 436695ec59c3dced0ab07df3d040646a07877f0c72e582df1976f97a2a3ceefe: Status 404 returned error can't find the container with id 436695ec59c3dced0ab07df3d040646a07877f0c72e582df1976f97a2a3ceefe Dec 10 14:55:32 crc kubenswrapper[4727]: W1210 14:55:32.625667 4727 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9724e12_6c3d_48c9_b783_13520354dda1.slice/crio-c1034d449aa82a57ad16934ceecf6435b8bfa81983d5e96a17b06d25ede13ea9 WatchSource:0}: Error finding container c1034d449aa82a57ad16934ceecf6435b8bfa81983d5e96a17b06d25ede13ea9: Status 404 returned error can't find the container with id c1034d449aa82a57ad16934ceecf6435b8bfa81983d5e96a17b06d25ede13ea9 Dec 10 14:55:32 crc kubenswrapper[4727]: I1210 14:55:32.699671 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zb5mz" Dec 10 14:55:33 crc kubenswrapper[4727]: I1210 14:55:33.122860 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-cv5tz" event={"ID":"8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c","Type":"ContainerStarted","Data":"db29b5972280212dd26e16945024f51684fb2294cca2bdb5b184f3d6abda9806"} Dec 10 14:55:33 crc kubenswrapper[4727]: I1210 14:55:33.127270 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cvb6b" event={"ID":"3ba9cb5c-65f9-4733-a32c-018aa65c9a40","Type":"ContainerStarted","Data":"898aa4ef284ed39908f384d484761df0a37b19c49316728e6b53f3cb5ca097ed"} Dec 10 14:55:33 crc kubenswrapper[4727]: I1210 14:55:33.129681 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ebc3-account-create-update-cr7fz" event={"ID":"56bec11f-546b-44e3-9fbe-11468e08ebca","Type":"ContainerStarted","Data":"8e4e320f1107769c97435d0be6e565a2558e1f2f53133cb55356c89ac5fac7bb"} Dec 10 14:55:33 crc kubenswrapper[4727]: I1210 14:55:33.135433 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-224d-account-create-update-897d4" event={"ID":"e9724e12-6c3d-48c9-b783-13520354dda1","Type":"ContainerStarted","Data":"c1034d449aa82a57ad16934ceecf6435b8bfa81983d5e96a17b06d25ede13ea9"} Dec 10 14:55:33 crc kubenswrapper[4727]: I1210 14:55:33.140727 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fpmqv" event={"ID":"506cfe4b-7b71-418d-bba3-0e534380eea8","Type":"ContainerStarted","Data":"6353a233e6b3226d18f3d7528d56f27b2a998ee22361c4eb87105cd934ad33d6"} Dec 10 14:55:33 crc kubenswrapper[4727]: I1210 14:55:33.142887 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b8j8z" event={"ID":"b6c737fc-a5d8-4dde-9040-d6ff30a37557","Type":"ContainerStarted","Data":"941086c457945b1e62585ba92cc487e8d4397891c428c573391f9003f67bc88b"} Dec 10 14:55:33 crc kubenswrapper[4727]: I1210 14:55:33.145685 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-n2gx8" event={"ID":"402116b0-924d-4dec-aece-9da581a05b83","Type":"ContainerStarted","Data":"797f8797b8c615e506e2a055241d1bf039917a4f5ab8fb757abace986ff74ff8"} Dec 10 14:55:33 crc kubenswrapper[4727]: I1210 14:55:33.148623 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ztlsl" event={"ID":"c8b7b4d6-d45b-40ce-80db-772552dfa8e0","Type":"ContainerStarted","Data":"436695ec59c3dced0ab07df3d040646a07877f0c72e582df1976f97a2a3ceefe"} Dec 10 14:55:33 crc kubenswrapper[4727]: I1210 14:55:33.159706 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bf5-account-create-update-8pj4j" event={"ID":"f4a907b9-0bc8-44e2-b3cb-3a1e867975ec","Type":"ContainerStarted","Data":"eaed7134da53bf2b5f1a7559f3bf307bc475d85e2885a729e1cc0a5405beefda"} Dec 10 14:55:33 crc kubenswrapper[4727]: I1210 14:55:33.161577 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-03a2-account-create-update-p7wsc" event={"ID":"577fad75-56b7-4ad0-89c9-b44f0c771ef7","Type":"ContainerStarted","Data":"99e35281bc3d00699ddc1fa566de24e5d9e0dc436110eda5c37bbd68f9eec080"} Dec 10 14:55:33 crc kubenswrapper[4727]: I1210 14:55:33.163435 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ndq58" event={"ID":"b8712c0b-547f-4dda-83f9-bc4d5b9063e8","Type":"ContainerStarted","Data":"cef4f3bd8b5ac6194c36008f2094ef2441458992fed02e188f44685273733895"} Dec 10 14:55:33 crc kubenswrapper[4727]: I1210 14:55:33.169601 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-05f6-account-create-update-48sht" event={"ID":"51c0327a-5640-4478-8641-5e495745e5cd","Type":"ContainerStarted","Data":"27602d0d10364ef258c6f0f6a192ebfc4d4a91dddefafebb5a326736f04e3e65"} Dec 10 14:55:33 crc kubenswrapper[4727]: I1210 14:55:33.179705 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ec242832-714a-4cb7-9bdc-c88b5336c201","Type":"ContainerStarted","Data":"2f6dd44126903c381a321a7d38ae6d156b2455fba9772921041307502e76d17f"} Dec 10 14:55:33 crc kubenswrapper[4727]: I1210 14:55:33.184823 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-r4x4n"] Dec 10 14:55:33 crc kubenswrapper[4727]: W1210 14:55:33.243167 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56fa8f94_70a1_47ce_85c5_947e889ba79c.slice/crio-da5cad65a9566f9216cbbc857dad9abdba41ee4b5ec1477ef808ee412abb1059 WatchSource:0}: Error finding container da5cad65a9566f9216cbbc857dad9abdba41ee4b5ec1477ef808ee412abb1059: Status 404 returned error can't find the container with id da5cad65a9566f9216cbbc857dad9abdba41ee4b5ec1477ef808ee412abb1059 Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.005659 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:34 crc kubenswrapper[4727]: E1210 14:55:34.005922 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 14:55:34 crc kubenswrapper[4727]: E1210 14:55:34.006163 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 14:55:34 crc kubenswrapper[4727]: E1210 14:55:34.006244 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift podName:2b2c88bb-9134-46aa-8595-4762fca3fb57 nodeName:}" failed. No retries permitted until 2025-12-10 14:55:38.006224224 +0000 UTC m=+1442.200998766 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift") pod "swift-storage-0" (UID: "2b2c88bb-9134-46aa-8595-4762fca3fb57") : configmap "swift-ring-files" not found Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.110823 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zb5mz"] Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.192810 4727 generic.go:334] "Generic (PLEG): container finished" podID="506cfe4b-7b71-418d-bba3-0e534380eea8" containerID="6353a233e6b3226d18f3d7528d56f27b2a998ee22361c4eb87105cd934ad33d6" exitCode=0 Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.192882 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fpmqv" event={"ID":"506cfe4b-7b71-418d-bba3-0e534380eea8","Type":"ContainerDied","Data":"6353a233e6b3226d18f3d7528d56f27b2a998ee22361c4eb87105cd934ad33d6"} Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.196055 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bf5-account-create-update-8pj4j" event={"ID":"f4a907b9-0bc8-44e2-b3cb-3a1e867975ec","Type":"ContainerStarted","Data":"3d78f4eb0703ce61821b347174f137572bf8b16b010926faae7bef03669a90d6"} Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.199066 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-03a2-account-create-update-p7wsc" event={"ID":"577fad75-56b7-4ad0-89c9-b44f0c771ef7","Type":"ContainerStarted","Data":"438b971e05328a4f090a89b9531eca39efe19e1ae34b1cc538d5c87b3ed17266"} Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.206347 4727 generic.go:334] "Generic (PLEG): container finished" podID="b8712c0b-547f-4dda-83f9-bc4d5b9063e8" containerID="6f5d4d38ed8c6602ea941120bef89509cc0ff1d6ccf9d554ff2d6e31377aeac4" exitCode=0 Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.206432 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ndq58" event={"ID":"b8712c0b-547f-4dda-83f9-bc4d5b9063e8","Type":"ContainerDied","Data":"6f5d4d38ed8c6602ea941120bef89509cc0ff1d6ccf9d554ff2d6e31377aeac4"} Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.217711 4727 generic.go:334] "Generic (PLEG): container finished" podID="8216a031-5caf-4b21-9613-c798dd35dfb7" containerID="3e6a90c7af245bf5101ab8d0a5c85b5879fb7ea4429f8a77715634231728acc3" exitCode=0 Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.217758 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8216a031-5caf-4b21-9613-c798dd35dfb7","Type":"ContainerDied","Data":"3e6a90c7af245bf5101ab8d0a5c85b5879fb7ea4429f8a77715634231728acc3"} Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.220211 4727 generic.go:334] "Generic (PLEG): container finished" podID="945edb14-70e4-40c5-a208-f14443517e42" containerID="5b829f2452808ffc4debe63cdbd3465c47877a424d1cc266c4d973c24511537d" exitCode=0 Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.220298 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-0fdf-account-create-update-2tlkr" event={"ID":"945edb14-70e4-40c5-a208-f14443517e42","Type":"ContainerDied","Data":"5b829f2452808ffc4debe63cdbd3465c47877a424d1cc266c4d973c24511537d"} Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.222641 4727 generic.go:334] "Generic (PLEG): container finished" podID="3ba9cb5c-65f9-4733-a32c-018aa65c9a40" 
containerID="fcf847f6a0f1a480d25de92d21a632d373bc462220f8f44546664a31fca49631" exitCode=0 Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.222685 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cvb6b" event={"ID":"3ba9cb5c-65f9-4733-a32c-018aa65c9a40","Type":"ContainerDied","Data":"fcf847f6a0f1a480d25de92d21a632d373bc462220f8f44546664a31fca49631"} Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.229375 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-224d-account-create-update-897d4" event={"ID":"e9724e12-6c3d-48c9-b783-13520354dda1","Type":"ContainerStarted","Data":"edea96d83b8d3e45d3a3b78f2ba974943b432df3fa330fb1359e7b18c5ee5ca7"} Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.232738 4727 generic.go:334] "Generic (PLEG): container finished" podID="b6c737fc-a5d8-4dde-9040-d6ff30a37557" containerID="afaf487934120d553e625c542c6f4868fa3756969ffddaaa44150ac64124da74" exitCode=0 Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.232796 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b8j8z" event={"ID":"b6c737fc-a5d8-4dde-9040-d6ff30a37557","Type":"ContainerDied","Data":"afaf487934120d553e625c542c6f4868fa3756969ffddaaa44150ac64124da74"} Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.236137 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-r4x4n" event={"ID":"56fa8f94-70a1-47ce-85c5-947e889ba79c","Type":"ContainerStarted","Data":"da5cad65a9566f9216cbbc857dad9abdba41ee4b5ec1477ef808ee412abb1059"} Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.242891 4727 generic.go:334] "Generic (PLEG): container finished" podID="402116b0-924d-4dec-aece-9da581a05b83" containerID="713cb248995a640c1cc2d8e6138dec9ce2a9eb9ee51c25b790b301a2fd5dea66" exitCode=0 Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.242971 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-n2gx8" event={"ID":"402116b0-924d-4dec-aece-9da581a05b83","Type":"ContainerDied","Data":"713cb248995a640c1cc2d8e6138dec9ce2a9eb9ee51c25b790b301a2fd5dea66"} Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.248699 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ebc3-account-create-update-cr7fz" event={"ID":"56bec11f-546b-44e3-9fbe-11468e08ebca","Type":"ContainerStarted","Data":"8bda4e32ae9e590de94815bd90216148a2ecbfacfee073d9f1e1d414ae015ffc"} Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.251734 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-05f6-account-create-update-48sht" event={"ID":"51c0327a-5640-4478-8641-5e495745e5cd","Type":"ContainerStarted","Data":"c1cc9183af7b8b0308207008c8fd76447cbef1da21c4290678387239c3036349"} Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.254365 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6bf5-account-create-update-8pj4j" podStartSLOduration=8.254349851 podStartE2EDuration="8.254349851s" podCreationTimestamp="2025-12-10 14:55:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:34.249081538 +0000 UTC m=+1438.443856090" watchObservedRunningTime="2025-12-10 14:55:34.254349851 +0000 UTC m=+1438.449124393" Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.255859 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c" containerID="c98a3d2ef185b37b4effba5cb660dc3825850534f0667eda304a4bd5bb8f49be" exitCode=0 Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.255951 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-cv5tz" event={"ID":"8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c","Type":"ContainerDied","Data":"c98a3d2ef185b37b4effba5cb660dc3825850534f0667eda304a4bd5bb8f49be"} Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.267252 4727 generic.go:334] "Generic (PLEG): container finished" podID="c8b7b4d6-d45b-40ce-80db-772552dfa8e0" containerID="c53be6facd652b15459f50cf4387ec1e394253f718fb7c436e5b98f93f545e4d" exitCode=0 Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.267306 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ztlsl" event={"ID":"c8b7b4d6-d45b-40ce-80db-772552dfa8e0","Type":"ContainerDied","Data":"c53be6facd652b15459f50cf4387ec1e394253f718fb7c436e5b98f93f545e4d"} Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.275808 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-03a2-account-create-update-p7wsc" podStartSLOduration=9.275787403 podStartE2EDuration="9.275787403s" podCreationTimestamp="2025-12-10 14:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:34.263365499 +0000 UTC m=+1438.458140041" watchObservedRunningTime="2025-12-10 14:55:34.275787403 +0000 UTC m=+1438.470561945" Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.419733 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-ebc3-account-create-update-cr7fz" podStartSLOduration=8.419715908 podStartE2EDuration="8.419715908s" podCreationTimestamp="2025-12-10 14:55:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:34.381034051 +0000 UTC m=+1438.575808593" watchObservedRunningTime="2025-12-10 14:55:34.419715908 +0000 UTC m=+1438.614490450" Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.428519 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-05f6-account-create-update-48sht" podStartSLOduration=9.428496499 podStartE2EDuration="9.428496499s" podCreationTimestamp="2025-12-10 14:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:34.395369663 +0000 UTC m=+1438.590144205" watchObservedRunningTime="2025-12-10 14:55:34.428496499 +0000 UTC m=+1438.623271041" Dec 10 14:55:34 crc kubenswrapper[4727]: I1210 14:55:34.443843 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-224d-account-create-update-897d4" podStartSLOduration=10.443817255999999 podStartE2EDuration="10.443817256s" podCreationTimestamp="2025-12-10 14:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:34.409223503 +0000 UTC m=+1438.603998065" watchObservedRunningTime="2025-12-10 14:55:34.443817256 +0000 UTC m=+1438.638591798" Dec 10 14:55:34 crc kubenswrapper[4727]: W1210 14:55:34.848487 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20c3a0fe_e0f7_4f79_ae22_d143511424e9.slice/crio-4a443908af31711f482cfdcb573093bdbe455bbc26420e7b222f1ee2cf498cac WatchSource:0}: Error finding container 4a443908af31711f482cfdcb573093bdbe455bbc26420e7b222f1ee2cf498cac: Status 404 returned error can't find the container with id 4a443908af31711f482cfdcb573093bdbe455bbc26420e7b222f1ee2cf498cac Dec 10 14:55:35 crc kubenswrapper[4727]: I1210 14:55:35.280870 4727 generic.go:334] "Generic (PLEG): container finished" podID="577fad75-56b7-4ad0-89c9-b44f0c771ef7" containerID="438b971e05328a4f090a89b9531eca39efe19e1ae34b1cc538d5c87b3ed17266" exitCode=0 Dec 10 14:55:35 crc kubenswrapper[4727]: I1210 14:55:35.280966 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-03a2-account-create-update-p7wsc" event={"ID":"577fad75-56b7-4ad0-89c9-b44f0c771ef7","Type":"ContainerDied","Data":"438b971e05328a4f090a89b9531eca39efe19e1ae34b1cc538d5c87b3ed17266"} Dec 10 14:55:35 crc kubenswrapper[4727]: I1210 14:55:35.285277 4727 generic.go:334] "Generic (PLEG): container finished" podID="51c0327a-5640-4478-8641-5e495745e5cd" containerID="c1cc9183af7b8b0308207008c8fd76447cbef1da21c4290678387239c3036349" exitCode=0 Dec 10 14:55:35 crc kubenswrapper[4727]: I1210 14:55:35.285362 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-05f6-account-create-update-48sht" event={"ID":"51c0327a-5640-4478-8641-5e495745e5cd","Type":"ContainerDied","Data":"c1cc9183af7b8b0308207008c8fd76447cbef1da21c4290678387239c3036349"} Dec 10 14:55:35 crc kubenswrapper[4727]: I1210 14:55:35.288806 4727 generic.go:334] "Generic (PLEG): container finished" podID="56bec11f-546b-44e3-9fbe-11468e08ebca" containerID="8bda4e32ae9e590de94815bd90216148a2ecbfacfee073d9f1e1d414ae015ffc" exitCode=0 Dec 10 14:55:35 crc kubenswrapper[4727]: I1210 14:55:35.288882 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ebc3-account-create-update-cr7fz" event={"ID":"56bec11f-546b-44e3-9fbe-11468e08ebca","Type":"ContainerDied","Data":"8bda4e32ae9e590de94815bd90216148a2ecbfacfee073d9f1e1d414ae015ffc"} Dec 10 14:55:35 crc kubenswrapper[4727]: I1210 14:55:35.290749 4727 generic.go:334] "Generic (PLEG): container finished" podID="e9724e12-6c3d-48c9-b783-13520354dda1" containerID="edea96d83b8d3e45d3a3b78f2ba974943b432df3fa330fb1359e7b18c5ee5ca7" exitCode=0 Dec 10 14:55:35 crc kubenswrapper[4727]: I1210 14:55:35.290830 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-224d-account-create-update-897d4" event={"ID":"e9724e12-6c3d-48c9-b783-13520354dda1","Type":"ContainerDied","Data":"edea96d83b8d3e45d3a3b78f2ba974943b432df3fa330fb1359e7b18c5ee5ca7"} Dec 10 14:55:35 crc kubenswrapper[4727]: I1210 14:55:35.292044 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zb5mz" event={"ID":"20c3a0fe-e0f7-4f79-ae22-d143511424e9","Type":"ContainerStarted","Data":"4a443908af31711f482cfdcb573093bdbe455bbc26420e7b222f1ee2cf498cac"} Dec 10 14:55:35 crc kubenswrapper[4727]: I1210 14:55:35.293484 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4a907b9-0bc8-44e2-b3cb-3a1e867975ec" containerID="3d78f4eb0703ce61821b347174f137572bf8b16b010926faae7bef03669a90d6" exitCode=0 Dec 10 14:55:35 crc kubenswrapper[4727]: I1210 14:55:35.293527 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bf5-account-create-update-8pj4j" 
event={"ID":"f4a907b9-0bc8-44e2-b3cb-3a1e867975ec","Type":"ContainerDied","Data":"3d78f4eb0703ce61821b347174f137572bf8b16b010926faae7bef03669a90d6"} Dec 10 14:55:37 crc kubenswrapper[4727]: I1210 14:55:37.322409 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d","Type":"ContainerStarted","Data":"6dd467180182be7b79757d8d468cd20ec6607a5b859cb7c54be409e81b686d72"} Dec 10 14:55:37 crc kubenswrapper[4727]: I1210 14:55:37.728009 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:55:37 crc kubenswrapper[4727]: I1210 14:55:37.728243 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:55:37 crc kubenswrapper[4727]: I1210 14:55:37.892500 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-03a2-account-create-update-p7wsc" Dec 10 14:55:37 crc kubenswrapper[4727]: I1210 14:55:37.930596 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ebc3-account-create-update-cr7fz" Dec 10 14:55:37 crc kubenswrapper[4727]: I1210 14:55:37.963760 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ztlsl" Dec 10 14:55:37 crc kubenswrapper[4727]: I1210 14:55:37.985500 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-cv5tz" Dec 10 14:55:37 crc kubenswrapper[4727]: I1210 14:55:37.995633 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ndq58" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.017097 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-n2gx8" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.021700 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-0fdf-account-create-update-2tlkr" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.034168 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b8j8z" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.055172 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-05f6-account-create-update-48sht" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.061922 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnvt8\" (UniqueName: \"kubernetes.io/projected/51c0327a-5640-4478-8641-5e495745e5cd-kube-api-access-jnvt8\") pod \"51c0327a-5640-4478-8641-5e495745e5cd\" (UID: \"51c0327a-5640-4478-8641-5e495745e5cd\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.061958 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p4tq\" (UniqueName: \"kubernetes.io/projected/c8b7b4d6-d45b-40ce-80db-772552dfa8e0-kube-api-access-2p4tq\") pod \"c8b7b4d6-d45b-40ce-80db-772552dfa8e0\" (UID: \"c8b7b4d6-d45b-40ce-80db-772552dfa8e0\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.061983 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/402116b0-924d-4dec-aece-9da581a05b83-operator-scripts\") pod \"402116b0-924d-4dec-aece-9da581a05b83\" (UID: \"402116b0-924d-4dec-aece-9da581a05b83\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.062004 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8712c0b-547f-4dda-83f9-bc4d5b9063e8-operator-scripts\") pod \"b8712c0b-547f-4dda-83f9-bc4d5b9063e8\" (UID: \"b8712c0b-547f-4dda-83f9-bc4d5b9063e8\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.062031 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b7b4d6-d45b-40ce-80db-772552dfa8e0-operator-scripts\") pod \"c8b7b4d6-d45b-40ce-80db-772552dfa8e0\" (UID: \"c8b7b4d6-d45b-40ce-80db-772552dfa8e0\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.062053 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwxjl\" (UniqueName: \"kubernetes.io/projected/b6c737fc-a5d8-4dde-9040-d6ff30a37557-kube-api-access-dwxjl\") pod \"b6c737fc-a5d8-4dde-9040-d6ff30a37557\" (UID: \"b6c737fc-a5d8-4dde-9040-d6ff30a37557\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.062079 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb446\" (UniqueName: \"kubernetes.io/projected/56bec11f-546b-44e3-9fbe-11468e08ebca-kube-api-access-cb446\") pod \"56bec11f-546b-44e3-9fbe-11468e08ebca\" (UID: \"56bec11f-546b-44e3-9fbe-11468e08ebca\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.062102 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/945edb14-70e4-40c5-a208-f14443517e42-operator-scripts\") pod \"945edb14-70e4-40c5-a208-f14443517e42\" (UID: \"945edb14-70e4-40c5-a208-f14443517e42\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.063249 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8712c0b-547f-4dda-83f9-bc4d5b9063e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8712c0b-547f-4dda-83f9-bc4d5b9063e8" (UID: "b8712c0b-547f-4dda-83f9-bc4d5b9063e8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.063454 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txgxt\" (UniqueName: \"kubernetes.io/projected/b8712c0b-547f-4dda-83f9-bc4d5b9063e8-kube-api-access-txgxt\") pod \"b8712c0b-547f-4dda-83f9-bc4d5b9063e8\" (UID: \"b8712c0b-547f-4dda-83f9-bc4d5b9063e8\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.063506 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fqjn\" (UniqueName: \"kubernetes.io/projected/402116b0-924d-4dec-aece-9da581a05b83-kube-api-access-8fqjn\") pod \"402116b0-924d-4dec-aece-9da581a05b83\" (UID: \"402116b0-924d-4dec-aece-9da581a05b83\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.063537 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6c737fc-a5d8-4dde-9040-d6ff30a37557-operator-scripts\") pod \"b6c737fc-a5d8-4dde-9040-d6ff30a37557\" (UID: \"b6c737fc-a5d8-4dde-9040-d6ff30a37557\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.063563 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdhg7\" (UniqueName: \"kubernetes.io/projected/945edb14-70e4-40c5-a208-f14443517e42-kube-api-access-jdhg7\") pod \"945edb14-70e4-40c5-a208-f14443517e42\" (UID: \"945edb14-70e4-40c5-a208-f14443517e42\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.063586 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5fjj\" (UniqueName: \"kubernetes.io/projected/8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c-kube-api-access-h5fjj\") pod \"8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c\" (UID: \"8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.063629 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c-operator-scripts\") pod \"8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c\" (UID: \"8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.063664 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51c0327a-5640-4478-8641-5e495745e5cd-operator-scripts\") pod \"51c0327a-5640-4478-8641-5e495745e5cd\" (UID: \"51c0327a-5640-4478-8641-5e495745e5cd\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.063688 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56bec11f-546b-44e3-9fbe-11468e08ebca-operator-scripts\") pod \"56bec11f-546b-44e3-9fbe-11468e08ebca\" (UID: \"56bec11f-546b-44e3-9fbe-11468e08ebca\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.063727 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm5pf\" (UniqueName: \"kubernetes.io/projected/577fad75-56b7-4ad0-89c9-b44f0c771ef7-kube-api-access-wm5pf\") pod \"577fad75-56b7-4ad0-89c9-b44f0c771ef7\" (UID: \"577fad75-56b7-4ad0-89c9-b44f0c771ef7\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.063761 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/577fad75-56b7-4ad0-89c9-b44f0c771ef7-operator-scripts\") pod \"577fad75-56b7-4ad0-89c9-b44f0c771ef7\" (UID: \"577fad75-56b7-4ad0-89c9-b44f0c771ef7\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.064023 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:38 crc kubenswrapper[4727]: E1210 14:55:38.064458 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 14:55:38 crc kubenswrapper[4727]: E1210 14:55:38.064479 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 14:55:38 crc kubenswrapper[4727]: E1210 14:55:38.064533 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift podName:2b2c88bb-9134-46aa-8595-4762fca3fb57 nodeName:}" failed. No retries permitted until 2025-12-10 14:55:46.064509811 +0000 UTC m=+1450.259284353 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift") pod "swift-storage-0" (UID: "2b2c88bb-9134-46aa-8595-4762fca3fb57") : configmap "swift-ring-files" not found Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.064657 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/402116b0-924d-4dec-aece-9da581a05b83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "402116b0-924d-4dec-aece-9da581a05b83" (UID: "402116b0-924d-4dec-aece-9da581a05b83"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.064885 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8b7b4d6-d45b-40ce-80db-772552dfa8e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8b7b4d6-d45b-40ce-80db-772552dfa8e0" (UID: "c8b7b4d6-d45b-40ce-80db-772552dfa8e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.065027 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/945edb14-70e4-40c5-a208-f14443517e42-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "945edb14-70e4-40c5-a208-f14443517e42" (UID: "945edb14-70e4-40c5-a208-f14443517e42"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.065520 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56bec11f-546b-44e3-9fbe-11468e08ebca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56bec11f-546b-44e3-9fbe-11468e08ebca" (UID: "56bec11f-546b-44e3-9fbe-11468e08ebca"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.066090 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c" (UID: "8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.066534 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c0327a-5640-4478-8641-5e495745e5cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51c0327a-5640-4478-8641-5e495745e5cd" (UID: "51c0327a-5640-4478-8641-5e495745e5cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.067878 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/577fad75-56b7-4ad0-89c9-b44f0c771ef7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "577fad75-56b7-4ad0-89c9-b44f0c771ef7" (UID: "577fad75-56b7-4ad0-89c9-b44f0c771ef7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.068185 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c737fc-a5d8-4dde-9040-d6ff30a37557-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6c737fc-a5d8-4dde-9040-d6ff30a37557" (UID: "b6c737fc-a5d8-4dde-9040-d6ff30a37557"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.068838 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fpmqv" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.070084 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c-kube-api-access-h5fjj" (OuterVolumeSpecName: "kube-api-access-h5fjj") pod "8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c" (UID: "8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c"). InnerVolumeSpecName "kube-api-access-h5fjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.083234 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8712c0b-547f-4dda-83f9-bc4d5b9063e8-kube-api-access-txgxt" (OuterVolumeSpecName: "kube-api-access-txgxt") pod "b8712c0b-547f-4dda-83f9-bc4d5b9063e8" (UID: "b8712c0b-547f-4dda-83f9-bc4d5b9063e8"). InnerVolumeSpecName "kube-api-access-txgxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.083333 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c0327a-5640-4478-8641-5e495745e5cd-kube-api-access-jnvt8" (OuterVolumeSpecName: "kube-api-access-jnvt8") pod "51c0327a-5640-4478-8641-5e495745e5cd" (UID: "51c0327a-5640-4478-8641-5e495745e5cd"). InnerVolumeSpecName "kube-api-access-jnvt8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.083401 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c737fc-a5d8-4dde-9040-d6ff30a37557-kube-api-access-dwxjl" (OuterVolumeSpecName: "kube-api-access-dwxjl") pod "b6c737fc-a5d8-4dde-9040-d6ff30a37557" (UID: "b6c737fc-a5d8-4dde-9040-d6ff30a37557"). InnerVolumeSpecName "kube-api-access-dwxjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.083471 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56bec11f-546b-44e3-9fbe-11468e08ebca-kube-api-access-cb446" (OuterVolumeSpecName: "kube-api-access-cb446") pod "56bec11f-546b-44e3-9fbe-11468e08ebca" (UID: "56bec11f-546b-44e3-9fbe-11468e08ebca"). InnerVolumeSpecName "kube-api-access-cb446". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.092170 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b7b4d6-d45b-40ce-80db-772552dfa8e0-kube-api-access-2p4tq" (OuterVolumeSpecName: "kube-api-access-2p4tq") pod "c8b7b4d6-d45b-40ce-80db-772552dfa8e0" (UID: "c8b7b4d6-d45b-40ce-80db-772552dfa8e0"). InnerVolumeSpecName "kube-api-access-2p4tq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.093708 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/577fad75-56b7-4ad0-89c9-b44f0c771ef7-kube-api-access-wm5pf" (OuterVolumeSpecName: "kube-api-access-wm5pf") pod "577fad75-56b7-4ad0-89c9-b44f0c771ef7" (UID: "577fad75-56b7-4ad0-89c9-b44f0c771ef7"). InnerVolumeSpecName "kube-api-access-wm5pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.096430 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402116b0-924d-4dec-aece-9da581a05b83-kube-api-access-8fqjn" (OuterVolumeSpecName: "kube-api-access-8fqjn") pod "402116b0-924d-4dec-aece-9da581a05b83" (UID: "402116b0-924d-4dec-aece-9da581a05b83"). InnerVolumeSpecName "kube-api-access-8fqjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.097362 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945edb14-70e4-40c5-a208-f14443517e42-kube-api-access-jdhg7" (OuterVolumeSpecName: "kube-api-access-jdhg7") pod "945edb14-70e4-40c5-a208-f14443517e42" (UID: "945edb14-70e4-40c5-a208-f14443517e42"). InnerVolumeSpecName "kube-api-access-jdhg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.129451 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-224d-account-create-update-897d4" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.159164 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6bf5-account-create-update-8pj4j" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.167413 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vstmj\" (UniqueName: \"kubernetes.io/projected/f4a907b9-0bc8-44e2-b3cb-3a1e867975ec-kube-api-access-vstmj\") pod \"f4a907b9-0bc8-44e2-b3cb-3a1e867975ec\" (UID: \"f4a907b9-0bc8-44e2-b3cb-3a1e867975ec\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.167506 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9724e12-6c3d-48c9-b783-13520354dda1-operator-scripts\") pod \"e9724e12-6c3d-48c9-b783-13520354dda1\" (UID: \"e9724e12-6c3d-48c9-b783-13520354dda1\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.167695 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/506cfe4b-7b71-418d-bba3-0e534380eea8-operator-scripts\") pod \"506cfe4b-7b71-418d-bba3-0e534380eea8\" (UID: \"506cfe4b-7b71-418d-bba3-0e534380eea8\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.167762 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a907b9-0bc8-44e2-b3cb-3a1e867975ec-operator-scripts\") pod \"f4a907b9-0bc8-44e2-b3cb-3a1e867975ec\" (UID: \"f4a907b9-0bc8-44e2-b3cb-3a1e867975ec\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.167892 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5p92\" (UniqueName: \"kubernetes.io/projected/506cfe4b-7b71-418d-bba3-0e534380eea8-kube-api-access-t5p92\") pod \"506cfe4b-7b71-418d-bba3-0e534380eea8\" (UID: \"506cfe4b-7b71-418d-bba3-0e534380eea8\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.168138 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6cvh\" (UniqueName: \"kubernetes.io/projected/e9724e12-6c3d-48c9-b783-13520354dda1-kube-api-access-s6cvh\") pod \"e9724e12-6c3d-48c9-b783-13520354dda1\" (UID: \"e9724e12-6c3d-48c9-b783-13520354dda1\") " Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.168919 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9724e12-6c3d-48c9-b783-13520354dda1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9724e12-6c3d-48c9-b783-13520354dda1" (UID: "e9724e12-6c3d-48c9-b783-13520354dda1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169459 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnvt8\" (UniqueName: \"kubernetes.io/projected/51c0327a-5640-4478-8641-5e495745e5cd-kube-api-access-jnvt8\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169480 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p4tq\" (UniqueName: \"kubernetes.io/projected/c8b7b4d6-d45b-40ce-80db-772552dfa8e0-kube-api-access-2p4tq\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169492 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/402116b0-924d-4dec-aece-9da581a05b83-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169506 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8712c0b-547f-4dda-83f9-bc4d5b9063e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169515 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b7b4d6-d45b-40ce-80db-772552dfa8e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169539 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwxjl\" (UniqueName: \"kubernetes.io/projected/b6c737fc-a5d8-4dde-9040-d6ff30a37557-kube-api-access-dwxjl\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169548 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb446\" (UniqueName: \"kubernetes.io/projected/56bec11f-546b-44e3-9fbe-11468e08ebca-kube-api-access-cb446\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169560 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/945edb14-70e4-40c5-a208-f14443517e42-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169569 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txgxt\" (UniqueName: \"kubernetes.io/projected/b8712c0b-547f-4dda-83f9-bc4d5b9063e8-kube-api-access-txgxt\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169608 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fqjn\" (UniqueName: \"kubernetes.io/projected/402116b0-924d-4dec-aece-9da581a05b83-kube-api-access-8fqjn\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169606 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a907b9-0bc8-44e2-b3cb-3a1e867975ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4a907b9-0bc8-44e2-b3cb-3a1e867975ec" (UID: "f4a907b9-0bc8-44e2-b3cb-3a1e867975ec"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169619 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6c737fc-a5d8-4dde-9040-d6ff30a37557-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169651 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdhg7\" (UniqueName: \"kubernetes.io/projected/945edb14-70e4-40c5-a208-f14443517e42-kube-api-access-jdhg7\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169674 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5fjj\" (UniqueName: \"kubernetes.io/projected/8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c-kube-api-access-h5fjj\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169684 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169694 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51c0327a-5640-4478-8641-5e495745e5cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169703 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56bec11f-546b-44e3-9fbe-11468e08ebca-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169716 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9724e12-6c3d-48c9-b783-13520354dda1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169734 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm5pf\" (UniqueName: \"kubernetes.io/projected/577fad75-56b7-4ad0-89c9-b44f0c771ef7-kube-api-access-wm5pf\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.169744 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/577fad75-56b7-4ad0-89c9-b44f0c771ef7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.170566 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/506cfe4b-7b71-418d-bba3-0e534380eea8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "506cfe4b-7b71-418d-bba3-0e534380eea8" (UID: "506cfe4b-7b71-418d-bba3-0e534380eea8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.176747 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/506cfe4b-7b71-418d-bba3-0e534380eea8-kube-api-access-t5p92" (OuterVolumeSpecName: "kube-api-access-t5p92") pod "506cfe4b-7b71-418d-bba3-0e534380eea8" (UID: "506cfe4b-7b71-418d-bba3-0e534380eea8"). InnerVolumeSpecName "kube-api-access-t5p92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.187796 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a907b9-0bc8-44e2-b3cb-3a1e867975ec-kube-api-access-vstmj" (OuterVolumeSpecName: "kube-api-access-vstmj") pod "f4a907b9-0bc8-44e2-b3cb-3a1e867975ec" (UID: "f4a907b9-0bc8-44e2-b3cb-3a1e867975ec"). InnerVolumeSpecName "kube-api-access-vstmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.192525 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9724e12-6c3d-48c9-b783-13520354dda1-kube-api-access-s6cvh" (OuterVolumeSpecName: "kube-api-access-s6cvh") pod "e9724e12-6c3d-48c9-b783-13520354dda1" (UID: "e9724e12-6c3d-48c9-b783-13520354dda1"). InnerVolumeSpecName "kube-api-access-s6cvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.273797 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vstmj\" (UniqueName: \"kubernetes.io/projected/f4a907b9-0bc8-44e2-b3cb-3a1e867975ec-kube-api-access-vstmj\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.273847 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/506cfe4b-7b71-418d-bba3-0e534380eea8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.273860 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a907b9-0bc8-44e2-b3cb-3a1e867975ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.273873 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5p92\" (UniqueName: \"kubernetes.io/projected/506cfe4b-7b71-418d-bba3-0e534380eea8-kube-api-access-t5p92\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.273884 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6cvh\" (UniqueName: \"kubernetes.io/projected/e9724e12-6c3d-48c9-b783-13520354dda1-kube-api-access-s6cvh\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.333448 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-224d-account-create-update-897d4" event={"ID":"e9724e12-6c3d-48c9-b783-13520354dda1","Type":"ContainerDied","Data":"c1034d449aa82a57ad16934ceecf6435b8bfa81983d5e96a17b06d25ede13ea9"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.333491 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1034d449aa82a57ad16934ceecf6435b8bfa81983d5e96a17b06d25ede13ea9" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.333559 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-224d-account-create-update-897d4" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.337235 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-0fdf-account-create-update-2tlkr" event={"ID":"945edb14-70e4-40c5-a208-f14443517e42","Type":"ContainerDied","Data":"04b5df8b47e87b1d98bcfb24516d1ce7fd97b33ff87765ed41c3b09a20566008"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.337329 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04b5df8b47e87b1d98bcfb24516d1ce7fd97b33ff87765ed41c3b09a20566008" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.337396 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-0fdf-account-create-update-2tlkr" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.341955 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bf5-account-create-update-8pj4j" event={"ID":"f4a907b9-0bc8-44e2-b3cb-3a1e867975ec","Type":"ContainerDied","Data":"eaed7134da53bf2b5f1a7559f3bf307bc475d85e2885a729e1cc0a5405beefda"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.342017 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaed7134da53bf2b5f1a7559f3bf307bc475d85e2885a729e1cc0a5405beefda" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.341978 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bf5-account-create-update-8pj4j" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.344677 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-n2gx8" event={"ID":"402116b0-924d-4dec-aece-9da581a05b83","Type":"ContainerDied","Data":"797f8797b8c615e506e2a055241d1bf039917a4f5ab8fb757abace986ff74ff8"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.344720 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="797f8797b8c615e506e2a055241d1bf039917a4f5ab8fb757abace986ff74ff8" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.344798 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-n2gx8" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.348123 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ztlsl" event={"ID":"c8b7b4d6-d45b-40ce-80db-772552dfa8e0","Type":"ContainerDied","Data":"436695ec59c3dced0ab07df3d040646a07877f0c72e582df1976f97a2a3ceefe"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.348201 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="436695ec59c3dced0ab07df3d040646a07877f0c72e582df1976f97a2a3ceefe" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.348302 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-ztlsl" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.353810 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-cv5tz" event={"ID":"8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c","Type":"ContainerDied","Data":"db29b5972280212dd26e16945024f51684fb2294cca2bdb5b184f3d6abda9806"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.353876 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db29b5972280212dd26e16945024f51684fb2294cca2bdb5b184f3d6abda9806" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.353893 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-cv5tz" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.367441 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cvb6b" event={"ID":"3ba9cb5c-65f9-4733-a32c-018aa65c9a40","Type":"ContainerStarted","Data":"a265938aa62626947a4ea3b94cf185a3c9b53622cc6ae04cc492674f48e32caf"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.367528 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.373880 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ebc3-account-create-update-cr7fz" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.373884 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ebc3-account-create-update-cr7fz" event={"ID":"56bec11f-546b-44e3-9fbe-11468e08ebca","Type":"ContainerDied","Data":"8e4e320f1107769c97435d0be6e565a2558e1f2f53133cb55356c89ac5fac7bb"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.373942 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e4e320f1107769c97435d0be6e565a2558e1f2f53133cb55356c89ac5fac7bb" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.376286 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fpmqv" event={"ID":"506cfe4b-7b71-418d-bba3-0e534380eea8","Type":"ContainerDied","Data":"a849d6a3b5215079a43576a940700cc3adc99cce1791d9cb7ea2edac1fb61047"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.376312 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a849d6a3b5215079a43576a940700cc3adc99cce1791d9cb7ea2edac1fb61047" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.376376 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fpmqv" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.380080 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-r4x4n" event={"ID":"56fa8f94-70a1-47ce-85c5-947e889ba79c","Type":"ContainerStarted","Data":"7bd2f45e7d4ed83fe1e23772bdac9e4925340b4b9d40fd630928cdee63678831"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.389337 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ndq58" event={"ID":"b8712c0b-547f-4dda-83f9-bc4d5b9063e8","Type":"ContainerDied","Data":"cef4f3bd8b5ac6194c36008f2094ef2441458992fed02e188f44685273733895"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.389822 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cef4f3bd8b5ac6194c36008f2094ef2441458992fed02e188f44685273733895" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.389764 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ndq58" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.396332 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ec242832-714a-4cb7-9bdc-c88b5336c201","Type":"ContainerStarted","Data":"5571bf6b609826d1e175d16c781f7362b0bde3b7212ab346a2581f0eb452103d"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.397081 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-cvb6b" podStartSLOduration=10.39706543 podStartE2EDuration="10.39706543s" podCreationTimestamp="2025-12-10 14:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:38.387772956 +0000 UTC m=+1442.582547498" watchObservedRunningTime="2025-12-10 14:55:38.39706543 +0000 UTC m=+1442.591839972" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.401936 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8216a031-5caf-4b21-9613-c798dd35dfb7","Type":"ContainerStarted","Data":"acbcdaad2623c3eff1e8655b2176c4ebaec9fd19791231fb76d3a812ca13a55c"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.402259 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.414583 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-03a2-account-create-update-p7wsc" event={"ID":"577fad75-56b7-4ad0-89c9-b44f0c771ef7","Type":"ContainerDied","Data":"99e35281bc3d00699ddc1fa566de24e5d9e0dc436110eda5c37bbd68f9eec080"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.414624 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99e35281bc3d00699ddc1fa566de24e5d9e0dc436110eda5c37bbd68f9eec080" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.414695 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-03a2-account-create-update-p7wsc" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.424309 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-r4x4n" podStartSLOduration=4.005438595 podStartE2EDuration="8.424284788s" podCreationTimestamp="2025-12-10 14:55:30 +0000 UTC" firstStartedPulling="2025-12-10 14:55:33.253107594 +0000 UTC m=+1437.447882136" lastFinishedPulling="2025-12-10 14:55:37.671953787 +0000 UTC m=+1441.866728329" observedRunningTime="2025-12-10 14:55:38.413431154 +0000 UTC m=+1442.608205706" watchObservedRunningTime="2025-12-10 14:55:38.424284788 +0000 UTC m=+1442.619059330" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.428127 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b8j8z" event={"ID":"b6c737fc-a5d8-4dde-9040-d6ff30a37557","Type":"ContainerDied","Data":"941086c457945b1e62585ba92cc487e8d4397891c428c573391f9003f67bc88b"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.428214 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b8j8z" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.428178 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="941086c457945b1e62585ba92cc487e8d4397891c428c573391f9003f67bc88b" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.434613 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-05f6-account-create-update-48sht" event={"ID":"51c0327a-5640-4478-8641-5e495745e5cd","Type":"ContainerDied","Data":"27602d0d10364ef258c6f0f6a192ebfc4d4a91dddefafebb5a326736f04e3e65"} Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.434681 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27602d0d10364ef258c6f0f6a192ebfc4d4a91dddefafebb5a326736f04e3e65" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.434811 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-05f6-account-create-update-48sht" Dec 10 14:55:38 crc kubenswrapper[4727]: I1210 14:55:38.444838 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371940.40996 podStartE2EDuration="1m36.444816686s" podCreationTimestamp="2025-12-10 14:54:02 +0000 UTC" firstStartedPulling="2025-12-10 14:54:04.260459648 +0000 UTC m=+1348.455234190" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:38.438581029 +0000 UTC m=+1442.633355571" watchObservedRunningTime="2025-12-10 14:55:38.444816686 +0000 UTC m=+1442.639591228" Dec 10 14:55:38 crc kubenswrapper[4727]: E1210 14:55:38.581787 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod402116b0_924d_4dec_aece_9da581a05b83.slice/crio-797f8797b8c615e506e2a055241d1bf039917a4f5ab8fb757abace986ff74ff8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod402116b0_924d_4dec_aece_9da581a05b83.slice\": RecentStats: unable to find data in memory cache]" Dec 10 14:55:39 crc kubenswrapper[4727]: I1210 14:55:39.447793 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ec242832-714a-4cb7-9bdc-c88b5336c201","Type":"ContainerStarted","Data":"4143a8afa7861267fd2dc1482ce9d19bd71b8b01f40a845bcdfd4547ef0cc4b8"} Dec 10 14:55:39 crc kubenswrapper[4727]: I1210 14:55:39.475372 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.446921505 podStartE2EDuration="10.475350554s" podCreationTimestamp="2025-12-10 14:55:29 +0000 UTC" firstStartedPulling="2025-12-10 14:55:32.634736546 +0000 UTC m=+1436.829511088" lastFinishedPulling="2025-12-10 14:55:37.663165555 +0000 UTC m=+1441.857940137" observedRunningTime="2025-12-10 14:55:39.472871341 +0000 UTC m=+1443.667645883" watchObservedRunningTime="2025-12-10 14:55:39.475350554 +0000 UTC m=+1443.670125096" Dec 10 14:55:40 crc kubenswrapper[4727]: I1210 14:55:40.031519 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 10 14:55:41 crc kubenswrapper[4727]: I1210 14:55:41.387096 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="f429b94a-d632-41b7-85c3-584f6dfe4475" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.194591 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.204097 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4tq5b" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.351515 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-5phw9"] Dec 10 14:55:42 crc kubenswrapper[4727]: E1210 14:55:42.352474 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="577fad75-56b7-4ad0-89c9-b44f0c771ef7" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.352515 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="577fad75-56b7-4ad0-89c9-b44f0c771ef7" 
containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: E1210 14:55:42.352534 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c0327a-5640-4478-8641-5e495745e5cd" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.352543 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c0327a-5640-4478-8641-5e495745e5cd" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: E1210 14:55:42.352558 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.352566 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: E1210 14:55:42.352581 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506cfe4b-7b71-418d-bba3-0e534380eea8" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.352588 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="506cfe4b-7b71-418d-bba3-0e534380eea8" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: E1210 14:55:42.352606 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945edb14-70e4-40c5-a208-f14443517e42" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.352613 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="945edb14-70e4-40c5-a208-f14443517e42" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: E1210 14:55:42.352625 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9724e12-6c3d-48c9-b783-13520354dda1" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.352631 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9724e12-6c3d-48c9-b783-13520354dda1" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: E1210 14:55:42.352640 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8712c0b-547f-4dda-83f9-bc4d5b9063e8" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.352649 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8712c0b-547f-4dda-83f9-bc4d5b9063e8" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: E1210 14:55:42.352664 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402116b0-924d-4dec-aece-9da581a05b83" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.352670 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="402116b0-924d-4dec-aece-9da581a05b83" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: E1210 14:55:42.352681 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b7b4d6-d45b-40ce-80db-772552dfa8e0" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.352687 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b7b4d6-d45b-40ce-80db-772552dfa8e0" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: E1210 14:55:42.352695 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4a907b9-0bc8-44e2-b3cb-3a1e867975ec" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.352702 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a907b9-0bc8-44e2-b3cb-3a1e867975ec" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: E1210 14:55:42.352714 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56bec11f-546b-44e3-9fbe-11468e08ebca" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.352720 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="56bec11f-546b-44e3-9fbe-11468e08ebca" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: E1210 14:55:42.352729 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c737fc-a5d8-4dde-9040-d6ff30a37557" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.352736 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c737fc-a5d8-4dde-9040-d6ff30a37557" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.353064 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9724e12-6c3d-48c9-b783-13520354dda1" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.353105 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a907b9-0bc8-44e2-b3cb-3a1e867975ec" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.353114 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="402116b0-924d-4dec-aece-9da581a05b83" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.353129 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b7b4d6-d45b-40ce-80db-772552dfa8e0" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.353140 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8712c0b-547f-4dda-83f9-bc4d5b9063e8" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.353151 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="945edb14-70e4-40c5-a208-f14443517e42" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.353165 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="56bec11f-546b-44e3-9fbe-11468e08ebca" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.353178 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="577fad75-56b7-4ad0-89c9-b44f0c771ef7" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.353196 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c0327a-5640-4478-8641-5e495745e5cd" containerName="mariadb-account-create-update" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.353206 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c737fc-a5d8-4dde-9040-d6ff30a37557" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.353218 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="506cfe4b-7b71-418d-bba3-0e534380eea8" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 
14:55:42.353232 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c" containerName="mariadb-database-create" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.354185 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5phw9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.356973 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.356972 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pw9zq" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.357170 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.357275 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.364412 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5phw9"] Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.518447 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bcld\" (UniqueName: \"kubernetes.io/projected/70a282a3-fd71-4fd7-9a0b-7871a930affc-kube-api-access-5bcld\") pod \"keystone-db-sync-5phw9\" (UID: \"70a282a3-fd71-4fd7-9a0b-7871a930affc\") " pod="openstack/keystone-db-sync-5phw9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.518519 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70a282a3-fd71-4fd7-9a0b-7871a930affc-combined-ca-bundle\") pod \"keystone-db-sync-5phw9\" (UID: \"70a282a3-fd71-4fd7-9a0b-7871a930affc\") " pod="openstack/keystone-db-sync-5phw9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.518616 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70a282a3-fd71-4fd7-9a0b-7871a930affc-config-data\") pod \"keystone-db-sync-5phw9\" (UID: \"70a282a3-fd71-4fd7-9a0b-7871a930affc\") " pod="openstack/keystone-db-sync-5phw9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.588566 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-flmw8"] Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.590732 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.600418 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-flmw8"] Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.622882 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8436452-f31c-4b3b-9c04-2fddf8668b1e-catalog-content\") pod \"redhat-operators-flmw8\" (UID: \"e8436452-f31c-4b3b-9c04-2fddf8668b1e\") " pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.622969 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bcld\" (UniqueName: \"kubernetes.io/projected/70a282a3-fd71-4fd7-9a0b-7871a930affc-kube-api-access-5bcld\") pod \"keystone-db-sync-5phw9\" (UID: \"70a282a3-fd71-4fd7-9a0b-7871a930affc\") " pod="openstack/keystone-db-sync-5phw9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.622994 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70a282a3-fd71-4fd7-9a0b-7871a930affc-combined-ca-bundle\") pod \"keystone-db-sync-5phw9\" (UID: \"70a282a3-fd71-4fd7-9a0b-7871a930affc\") " pod="openstack/keystone-db-sync-5phw9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.623031 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8436452-f31c-4b3b-9c04-2fddf8668b1e-utilities\") pod \"redhat-operators-flmw8\" (UID: \"e8436452-f31c-4b3b-9c04-2fddf8668b1e\") " pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.623094 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70a282a3-fd71-4fd7-9a0b-7871a930affc-config-data\") pod \"keystone-db-sync-5phw9\" (UID: \"70a282a3-fd71-4fd7-9a0b-7871a930affc\") " pod="openstack/keystone-db-sync-5phw9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.623128 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpmkw\" (UniqueName: \"kubernetes.io/projected/e8436452-f31c-4b3b-9c04-2fddf8668b1e-kube-api-access-gpmkw\") pod \"redhat-operators-flmw8\" (UID: \"e8436452-f31c-4b3b-9c04-2fddf8668b1e\") " pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.641141 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70a282a3-fd71-4fd7-9a0b-7871a930affc-config-data\") pod \"keystone-db-sync-5phw9\" (UID: \"70a282a3-fd71-4fd7-9a0b-7871a930affc\") " pod="openstack/keystone-db-sync-5phw9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.644558 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70a282a3-fd71-4fd7-9a0b-7871a930affc-combined-ca-bundle\") pod \"keystone-db-sync-5phw9\" (UID: \"70a282a3-fd71-4fd7-9a0b-7871a930affc\") " pod="openstack/keystone-db-sync-5phw9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.663596 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x2fq6-config-46bk9"] Dec 10 14:55:42 crc 
kubenswrapper[4727]: I1210 14:55:42.665819 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.668848 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.671291 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bcld\" (UniqueName: \"kubernetes.io/projected/70a282a3-fd71-4fd7-9a0b-7871a930affc-kube-api-access-5bcld\") pod \"keystone-db-sync-5phw9\" (UID: \"70a282a3-fd71-4fd7-9a0b-7871a930affc\") " pod="openstack/keystone-db-sync-5phw9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.688198 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5phw9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.707373 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x2fq6-config-46bk9"] Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.735984 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8436452-f31c-4b3b-9c04-2fddf8668b1e-catalog-content\") pod \"redhat-operators-flmw8\" (UID: \"e8436452-f31c-4b3b-9c04-2fddf8668b1e\") " pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.736062 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzv6z\" (UniqueName: \"kubernetes.io/projected/dd2d4cb1-22c9-4a83-a377-799a48b75298-kube-api-access-wzv6z\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.736102 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-run-ovn\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.736133 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8436452-f31c-4b3b-9c04-2fddf8668b1e-utilities\") pod \"redhat-operators-flmw8\" (UID: \"e8436452-f31c-4b3b-9c04-2fddf8668b1e\") " pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.736149 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/dd2d4cb1-22c9-4a83-a377-799a48b75298-additional-scripts\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.736236 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd2d4cb1-22c9-4a83-a377-799a48b75298-scripts\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc 
kubenswrapper[4727]: I1210 14:55:42.736295 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpmkw\" (UniqueName: \"kubernetes.io/projected/e8436452-f31c-4b3b-9c04-2fddf8668b1e-kube-api-access-gpmkw\") pod \"redhat-operators-flmw8\" (UID: \"e8436452-f31c-4b3b-9c04-2fddf8668b1e\") " pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.736315 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-run\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.736444 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-log-ovn\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.737970 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8436452-f31c-4b3b-9c04-2fddf8668b1e-catalog-content\") pod \"redhat-operators-flmw8\" (UID: \"e8436452-f31c-4b3b-9c04-2fddf8668b1e\") " pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.738239 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8436452-f31c-4b3b-9c04-2fddf8668b1e-utilities\") pod \"redhat-operators-flmw8\" (UID: \"e8436452-f31c-4b3b-9c04-2fddf8668b1e\") " pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.770716 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpmkw\" (UniqueName: \"kubernetes.io/projected/e8436452-f31c-4b3b-9c04-2fddf8668b1e-kube-api-access-gpmkw\") pod \"redhat-operators-flmw8\" (UID: \"e8436452-f31c-4b3b-9c04-2fddf8668b1e\") " pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.838809 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd2d4cb1-22c9-4a83-a377-799a48b75298-scripts\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.839099 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-run\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.839173 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-log-ovn\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: 
I1210 14:55:42.839263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzv6z\" (UniqueName: \"kubernetes.io/projected/dd2d4cb1-22c9-4a83-a377-799a48b75298-kube-api-access-wzv6z\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.839289 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-run-ovn\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.839323 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/dd2d4cb1-22c9-4a83-a377-799a48b75298-additional-scripts\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.840128 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-log-ovn\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.840207 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/dd2d4cb1-22c9-4a83-a377-799a48b75298-additional-scripts\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.840530 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-run-ovn\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.840572 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-run\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.843139 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd2d4cb1-22c9-4a83-a377-799a48b75298-scripts\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 14:55:42.885419 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzv6z\" (UniqueName: \"kubernetes.io/projected/dd2d4cb1-22c9-4a83-a377-799a48b75298-kube-api-access-wzv6z\") pod \"ovn-controller-x2fq6-config-46bk9\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:42 crc kubenswrapper[4727]: I1210 
14:55:42.912521 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:55:43 crc kubenswrapper[4727]: I1210 14:55:43.070397 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5phw9"] Dec 10 14:55:43 crc kubenswrapper[4727]: I1210 14:55:43.172061 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:43 crc kubenswrapper[4727]: I1210 14:55:43.528377 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5phw9" event={"ID":"70a282a3-fd71-4fd7-9a0b-7871a930affc","Type":"ContainerStarted","Data":"c81879d4cb3078698c57168f23de60196d73b10d43e4bec9f242739920c35f09"} Dec 10 14:55:43 crc kubenswrapper[4727]: I1210 14:55:43.701319 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-flmw8"] Dec 10 14:55:43 crc kubenswrapper[4727]: W1210 14:55:43.713118 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8436452_f31c_4b3b_9c04_2fddf8668b1e.slice/crio-fd26376fbccc283124f74e13a63d0231b1717878cd55424075873a14fd7dfaec WatchSource:0}: Error finding container fd26376fbccc283124f74e13a63d0231b1717878cd55424075873a14fd7dfaec: Status 404 returned error can't find the container with id fd26376fbccc283124f74e13a63d0231b1717878cd55424075873a14fd7dfaec Dec 10 14:55:43 crc kubenswrapper[4727]: W1210 14:55:43.807607 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd2d4cb1_22c9_4a83_a377_799a48b75298.slice/crio-fbc0a9fa6f80e535d596a82ba331c6182e2d271d2959d1f1e53e7943cc12e4db WatchSource:0}: Error finding container fbc0a9fa6f80e535d596a82ba331c6182e2d271d2959d1f1e53e7943cc12e4db: Status 404 returned error can't find the container with id fbc0a9fa6f80e535d596a82ba331c6182e2d271d2959d1f1e53e7943cc12e4db Dec 10 14:55:43 crc kubenswrapper[4727]: I1210 14:55:43.811635 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x2fq6-config-46bk9"] Dec 10 14:55:44 crc kubenswrapper[4727]: I1210 14:55:44.396159 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:55:44 crc kubenswrapper[4727]: I1210 14:55:44.474671 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7d78q"] Dec 10 14:55:44 crc kubenswrapper[4727]: I1210 14:55:44.474945 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" podUID="0d6fded7-3029-48a5-96b8-6f8296acd34c" containerName="dnsmasq-dns" containerID="cri-o://236a24fb90407a7c69205feab67cf66a8b04820d59b138b7de60667c2b855c23" gracePeriod=10 Dec 10 14:55:44 crc kubenswrapper[4727]: I1210 14:55:44.559404 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x2fq6-config-46bk9" event={"ID":"dd2d4cb1-22c9-4a83-a377-799a48b75298","Type":"ContainerStarted","Data":"57f6cbe7e1b94d02d13977056bd83bec17acf47ee00d1a7c49165a2f91a66058"} Dec 10 14:55:44 crc kubenswrapper[4727]: I1210 14:55:44.559458 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x2fq6-config-46bk9" 
event={"ID":"dd2d4cb1-22c9-4a83-a377-799a48b75298","Type":"ContainerStarted","Data":"fbc0a9fa6f80e535d596a82ba331c6182e2d271d2959d1f1e53e7943cc12e4db"} Dec 10 14:55:44 crc kubenswrapper[4727]: I1210 14:55:44.578431 4727 generic.go:334] "Generic (PLEG): container finished" podID="e8436452-f31c-4b3b-9c04-2fddf8668b1e" containerID="c254cefa609d885f048f3d7acd800e20e378c1ed013856aa15f69e8a417c10f2" exitCode=0 Dec 10 14:55:44 crc kubenswrapper[4727]: I1210 14:55:44.601998 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-x2fq6-config-46bk9" podStartSLOduration=2.6019646119999997 podStartE2EDuration="2.601964612s" podCreationTimestamp="2025-12-10 14:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:44.599317915 +0000 UTC m=+1448.794092467" watchObservedRunningTime="2025-12-10 14:55:44.601964612 +0000 UTC m=+1448.796739154" Dec 10 14:55:44 crc kubenswrapper[4727]: I1210 14:55:44.633084 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flmw8" event={"ID":"e8436452-f31c-4b3b-9c04-2fddf8668b1e","Type":"ContainerDied","Data":"c254cefa609d885f048f3d7acd800e20e378c1ed013856aa15f69e8a417c10f2"} Dec 10 14:55:44 crc kubenswrapper[4727]: I1210 14:55:44.633414 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flmw8" event={"ID":"e8436452-f31c-4b3b-9c04-2fddf8668b1e","Type":"ContainerStarted","Data":"fd26376fbccc283124f74e13a63d0231b1717878cd55424075873a14fd7dfaec"} Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.256428 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.358837 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwfmh\" (UniqueName: \"kubernetes.io/projected/0d6fded7-3029-48a5-96b8-6f8296acd34c-kube-api-access-xwfmh\") pod \"0d6fded7-3029-48a5-96b8-6f8296acd34c\" (UID: \"0d6fded7-3029-48a5-96b8-6f8296acd34c\") " Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.358991 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d6fded7-3029-48a5-96b8-6f8296acd34c-dns-svc\") pod \"0d6fded7-3029-48a5-96b8-6f8296acd34c\" (UID: \"0d6fded7-3029-48a5-96b8-6f8296acd34c\") " Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.359038 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d6fded7-3029-48a5-96b8-6f8296acd34c-config\") pod \"0d6fded7-3029-48a5-96b8-6f8296acd34c\" (UID: \"0d6fded7-3029-48a5-96b8-6f8296acd34c\") " Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.367166 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d6fded7-3029-48a5-96b8-6f8296acd34c-kube-api-access-xwfmh" (OuterVolumeSpecName: "kube-api-access-xwfmh") pod "0d6fded7-3029-48a5-96b8-6f8296acd34c" (UID: "0d6fded7-3029-48a5-96b8-6f8296acd34c"). InnerVolumeSpecName "kube-api-access-xwfmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.407799 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d6fded7-3029-48a5-96b8-6f8296acd34c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d6fded7-3029-48a5-96b8-6f8296acd34c" (UID: "0d6fded7-3029-48a5-96b8-6f8296acd34c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.418390 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d6fded7-3029-48a5-96b8-6f8296acd34c-config" (OuterVolumeSpecName: "config") pod "0d6fded7-3029-48a5-96b8-6f8296acd34c" (UID: "0d6fded7-3029-48a5-96b8-6f8296acd34c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.461676 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwfmh\" (UniqueName: \"kubernetes.io/projected/0d6fded7-3029-48a5-96b8-6f8296acd34c-kube-api-access-xwfmh\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.461713 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d6fded7-3029-48a5-96b8-6f8296acd34c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.461722 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d6fded7-3029-48a5-96b8-6f8296acd34c-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.590128 4727 generic.go:334] "Generic (PLEG): container finished" podID="0d6fded7-3029-48a5-96b8-6f8296acd34c" containerID="236a24fb90407a7c69205feab67cf66a8b04820d59b138b7de60667c2b855c23" exitCode=0 Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.590217 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.590265 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" event={"ID":"0d6fded7-3029-48a5-96b8-6f8296acd34c","Type":"ContainerDied","Data":"236a24fb90407a7c69205feab67cf66a8b04820d59b138b7de60667c2b855c23"} Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.590304 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7d78q" event={"ID":"0d6fded7-3029-48a5-96b8-6f8296acd34c","Type":"ContainerDied","Data":"fde93f1dc5820c19bc780299ace0a04e07047ac792f8376597ae0b1f4dea997d"} Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.590353 4727 scope.go:117] "RemoveContainer" containerID="236a24fb90407a7c69205feab67cf66a8b04820d59b138b7de60667c2b855c23" Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.593154 4727 generic.go:334] "Generic (PLEG): container finished" podID="dd2d4cb1-22c9-4a83-a377-799a48b75298" containerID="57f6cbe7e1b94d02d13977056bd83bec17acf47ee00d1a7c49165a2f91a66058" exitCode=0 Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.593189 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x2fq6-config-46bk9" event={"ID":"dd2d4cb1-22c9-4a83-a377-799a48b75298","Type":"ContainerDied","Data":"57f6cbe7e1b94d02d13977056bd83bec17acf47ee00d1a7c49165a2f91a66058"} Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.614611 4727 scope.go:117] "RemoveContainer" containerID="37ce970a3c990b7babd4d797d66378f20c0d3a91757876ec724bd2a734e24df2" Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.650613 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7d78q"] Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.655070 4727 scope.go:117] "RemoveContainer" containerID="236a24fb90407a7c69205feab67cf66a8b04820d59b138b7de60667c2b855c23" Dec 10 14:55:45 crc kubenswrapper[4727]: E1210 14:55:45.656354 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"236a24fb90407a7c69205feab67cf66a8b04820d59b138b7de60667c2b855c23\": container with ID starting with 236a24fb90407a7c69205feab67cf66a8b04820d59b138b7de60667c2b855c23 not found: ID does not exist" containerID="236a24fb90407a7c69205feab67cf66a8b04820d59b138b7de60667c2b855c23" Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.656402 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"236a24fb90407a7c69205feab67cf66a8b04820d59b138b7de60667c2b855c23"} err="failed to get container status \"236a24fb90407a7c69205feab67cf66a8b04820d59b138b7de60667c2b855c23\": rpc error: code = NotFound desc = could not find container \"236a24fb90407a7c69205feab67cf66a8b04820d59b138b7de60667c2b855c23\": container with ID starting with 236a24fb90407a7c69205feab67cf66a8b04820d59b138b7de60667c2b855c23 not found: ID does not exist" Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.656435 4727 scope.go:117] "RemoveContainer" containerID="37ce970a3c990b7babd4d797d66378f20c0d3a91757876ec724bd2a734e24df2" Dec 10 14:55:45 crc kubenswrapper[4727]: E1210 14:55:45.658718 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37ce970a3c990b7babd4d797d66378f20c0d3a91757876ec724bd2a734e24df2\": container with ID starting with 
37ce970a3c990b7babd4d797d66378f20c0d3a91757876ec724bd2a734e24df2 not found: ID does not exist" containerID="37ce970a3c990b7babd4d797d66378f20c0d3a91757876ec724bd2a734e24df2" Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.658749 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37ce970a3c990b7babd4d797d66378f20c0d3a91757876ec724bd2a734e24df2"} err="failed to get container status \"37ce970a3c990b7babd4d797d66378f20c0d3a91757876ec724bd2a734e24df2\": rpc error: code = NotFound desc = could not find container \"37ce970a3c990b7babd4d797d66378f20c0d3a91757876ec724bd2a734e24df2\": container with ID starting with 37ce970a3c990b7babd4d797d66378f20c0d3a91757876ec724bd2a734e24df2 not found: ID does not exist" Dec 10 14:55:45 crc kubenswrapper[4727]: I1210 14:55:45.660759 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7d78q"] Dec 10 14:55:46 crc kubenswrapper[4727]: I1210 14:55:46.074804 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:55:46 crc kubenswrapper[4727]: E1210 14:55:46.075037 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 14:55:46 crc kubenswrapper[4727]: E1210 14:55:46.075280 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 14:55:46 crc kubenswrapper[4727]: E1210 14:55:46.075418 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift podName:2b2c88bb-9134-46aa-8595-4762fca3fb57 nodeName:}" failed. No retries permitted until 2025-12-10 14:56:02.075396656 +0000 UTC m=+1466.270171198 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift") pod "swift-storage-0" (UID: "2b2c88bb-9134-46aa-8595-4762fca3fb57") : configmap "swift-ring-files" not found Dec 10 14:55:46 crc kubenswrapper[4727]: I1210 14:55:46.575965 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d6fded7-3029-48a5-96b8-6f8296acd34c" path="/var/lib/kubelet/pods/0d6fded7-3029-48a5-96b8-6f8296acd34c/volumes" Dec 10 14:55:50 crc kubenswrapper[4727]: I1210 14:55:50.112384 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 10 14:55:51 crc kubenswrapper[4727]: I1210 14:55:51.380542 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="f429b94a-d632-41b7-85c3-584f6dfe4475" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 14:55:51 crc kubenswrapper[4727]: I1210 14:55:51.666664 4727 generic.go:334] "Generic (PLEG): container finished" podID="56fa8f94-70a1-47ce-85c5-947e889ba79c" containerID="7bd2f45e7d4ed83fe1e23772bdac9e4925340b4b9d40fd630928cdee63678831" exitCode=0 Dec 10 14:55:51 crc kubenswrapper[4727]: I1210 14:55:51.666726 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-r4x4n" event={"ID":"56fa8f94-70a1-47ce-85c5-947e889ba79c","Type":"ContainerDied","Data":"7bd2f45e7d4ed83fe1e23772bdac9e4925340b4b9d40fd630928cdee63678831"} Dec 10 14:55:51 crc kubenswrapper[4727]: I1210 14:55:51.979493 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-x2fq6" Dec 10 14:55:53 crc kubenswrapper[4727]: I1210 14:55:53.797556 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.852521 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.858652 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.982520 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/56fa8f94-70a1-47ce-85c5-947e889ba79c-ring-data-devices\") pod \"56fa8f94-70a1-47ce-85c5-947e889ba79c\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.983283 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56fa8f94-70a1-47ce-85c5-947e889ba79c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "56fa8f94-70a1-47ce-85c5-947e889ba79c" (UID: "56fa8f94-70a1-47ce-85c5-947e889ba79c"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.983354 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/dd2d4cb1-22c9-4a83-a377-799a48b75298-additional-scripts\") pod \"dd2d4cb1-22c9-4a83-a377-799a48b75298\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.983754 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2d4cb1-22c9-4a83-a377-799a48b75298-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "dd2d4cb1-22c9-4a83-a377-799a48b75298" (UID: "dd2d4cb1-22c9-4a83-a377-799a48b75298"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.983830 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd2d4cb1-22c9-4a83-a377-799a48b75298-scripts\") pod \"dd2d4cb1-22c9-4a83-a377-799a48b75298\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.984505 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2d4cb1-22c9-4a83-a377-799a48b75298-scripts" (OuterVolumeSpecName: "scripts") pod "dd2d4cb1-22c9-4a83-a377-799a48b75298" (UID: "dd2d4cb1-22c9-4a83-a377-799a48b75298"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.985203 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj65f\" (UniqueName: \"kubernetes.io/projected/56fa8f94-70a1-47ce-85c5-947e889ba79c-kube-api-access-lj65f\") pod \"56fa8f94-70a1-47ce-85c5-947e889ba79c\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.985343 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-log-ovn\") pod \"dd2d4cb1-22c9-4a83-a377-799a48b75298\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.985410 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "dd2d4cb1-22c9-4a83-a377-799a48b75298" (UID: "dd2d4cb1-22c9-4a83-a377-799a48b75298"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.985460 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-dispersionconf\") pod \"56fa8f94-70a1-47ce-85c5-947e889ba79c\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.985500 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-run\") pod \"dd2d4cb1-22c9-4a83-a377-799a48b75298\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.985526 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/56fa8f94-70a1-47ce-85c5-947e889ba79c-etc-swift\") pod \"56fa8f94-70a1-47ce-85c5-947e889ba79c\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.985550 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-combined-ca-bundle\") pod \"56fa8f94-70a1-47ce-85c5-947e889ba79c\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.985575 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56fa8f94-70a1-47ce-85c5-947e889ba79c-scripts\") pod \"56fa8f94-70a1-47ce-85c5-947e889ba79c\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.985596 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzv6z\" (UniqueName: \"kubernetes.io/projected/dd2d4cb1-22c9-4a83-a377-799a48b75298-kube-api-access-wzv6z\") pod \"dd2d4cb1-22c9-4a83-a377-799a48b75298\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.985610 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-run-ovn\") pod \"dd2d4cb1-22c9-4a83-a377-799a48b75298\" (UID: \"dd2d4cb1-22c9-4a83-a377-799a48b75298\") " Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.985626 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-swiftconf\") pod \"56fa8f94-70a1-47ce-85c5-947e889ba79c\" (UID: \"56fa8f94-70a1-47ce-85c5-947e889ba79c\") " Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.986139 4727 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.986159 4727 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/56fa8f94-70a1-47ce-85c5-947e889ba79c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.986169 4727 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/dd2d4cb1-22c9-4a83-a377-799a48b75298-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.986178 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd2d4cb1-22c9-4a83-a377-799a48b75298-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.987218 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-run" (OuterVolumeSpecName: "var-run") pod "dd2d4cb1-22c9-4a83-a377-799a48b75298" (UID: "dd2d4cb1-22c9-4a83-a377-799a48b75298"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.987259 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "dd2d4cb1-22c9-4a83-a377-799a48b75298" (UID: "dd2d4cb1-22c9-4a83-a377-799a48b75298"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.990851 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd2d4cb1-22c9-4a83-a377-799a48b75298-kube-api-access-wzv6z" (OuterVolumeSpecName: "kube-api-access-wzv6z") pod "dd2d4cb1-22c9-4a83-a377-799a48b75298" (UID: "dd2d4cb1-22c9-4a83-a377-799a48b75298"). InnerVolumeSpecName "kube-api-access-wzv6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.991499 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56fa8f94-70a1-47ce-85c5-947e889ba79c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "56fa8f94-70a1-47ce-85c5-947e889ba79c" (UID: "56fa8f94-70a1-47ce-85c5-947e889ba79c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.995002 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56fa8f94-70a1-47ce-85c5-947e889ba79c-kube-api-access-lj65f" (OuterVolumeSpecName: "kube-api-access-lj65f") pod "56fa8f94-70a1-47ce-85c5-947e889ba79c" (UID: "56fa8f94-70a1-47ce-85c5-947e889ba79c"). InnerVolumeSpecName "kube-api-access-lj65f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:55 crc kubenswrapper[4727]: I1210 14:55:55.995259 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "56fa8f94-70a1-47ce-85c5-947e889ba79c" (UID: "56fa8f94-70a1-47ce-85c5-947e889ba79c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.010635 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56fa8f94-70a1-47ce-85c5-947e889ba79c-scripts" (OuterVolumeSpecName: "scripts") pod "56fa8f94-70a1-47ce-85c5-947e889ba79c" (UID: "56fa8f94-70a1-47ce-85c5-947e889ba79c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.012702 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "56fa8f94-70a1-47ce-85c5-947e889ba79c" (UID: "56fa8f94-70a1-47ce-85c5-947e889ba79c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.037723 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56fa8f94-70a1-47ce-85c5-947e889ba79c" (UID: "56fa8f94-70a1-47ce-85c5-947e889ba79c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.087309 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj65f\" (UniqueName: \"kubernetes.io/projected/56fa8f94-70a1-47ce-85c5-947e889ba79c-kube-api-access-lj65f\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.087341 4727 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.087351 4727 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-run\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.087359 4727 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/56fa8f94-70a1-47ce-85c5-947e889ba79c-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.087367 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.087375 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56fa8f94-70a1-47ce-85c5-947e889ba79c-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.087384 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzv6z\" (UniqueName: \"kubernetes.io/projected/dd2d4cb1-22c9-4a83-a377-799a48b75298-kube-api-access-wzv6z\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.087392 4727 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd2d4cb1-22c9-4a83-a377-799a48b75298-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.087400 4727 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/56fa8f94-70a1-47ce-85c5-947e889ba79c-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:56 crc kubenswrapper[4727]: E1210 14:55:56.719536 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Dec 10 14:55:56 crc kubenswrapper[4727]: E1210 14:55:56.719752 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47ztr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-zb5mz_openstack(20c3a0fe-e0f7-4f79-ae22-d143511424e9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:55:56 crc kubenswrapper[4727]: E1210 14:55:56.721139 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-zb5mz" podUID="20c3a0fe-e0f7-4f79-ae22-d143511424e9" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.725000 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x2fq6-config-46bk9" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.725949 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x2fq6-config-46bk9" event={"ID":"dd2d4cb1-22c9-4a83-a377-799a48b75298","Type":"ContainerDied","Data":"fbc0a9fa6f80e535d596a82ba331c6182e2d271d2959d1f1e53e7943cc12e4db"} Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.725978 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbc0a9fa6f80e535d596a82ba331c6182e2d271d2959d1f1e53e7943cc12e4db" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.730764 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-r4x4n" event={"ID":"56fa8f94-70a1-47ce-85c5-947e889ba79c","Type":"ContainerDied","Data":"da5cad65a9566f9216cbbc857dad9abdba41ee4b5ec1477ef808ee412abb1059"} Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.730796 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da5cad65a9566f9216cbbc857dad9abdba41ee4b5ec1477ef808ee412abb1059" Dec 10 14:55:56 crc kubenswrapper[4727]: I1210 14:55:56.731063 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-r4x4n" Dec 10 14:55:57 crc kubenswrapper[4727]: I1210 14:55:57.019649 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-x2fq6-config-46bk9"] Dec 10 14:55:57 crc kubenswrapper[4727]: I1210 14:55:57.036427 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-x2fq6-config-46bk9"] Dec 10 14:55:57 crc kubenswrapper[4727]: I1210 14:55:57.755831 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flmw8" event={"ID":"e8436452-f31c-4b3b-9c04-2fddf8668b1e","Type":"ContainerStarted","Data":"1fc02b4b791ff371a7d0d6d30185eaad8421a39fae49a8ca72d275cf2315a9de"} Dec 10 14:55:57 crc kubenswrapper[4727]: E1210 14:55:57.756260 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-zb5mz" podUID="20c3a0fe-e0f7-4f79-ae22-d143511424e9" Dec 10 14:55:58 crc kubenswrapper[4727]: I1210 14:55:58.600189 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd2d4cb1-22c9-4a83-a377-799a48b75298" path="/var/lib/kubelet/pods/dd2d4cb1-22c9-4a83-a377-799a48b75298/volumes" Dec 10 14:55:59 crc kubenswrapper[4727]: I1210 14:55:59.775343 4727 generic.go:334] "Generic (PLEG): container finished" podID="e8436452-f31c-4b3b-9c04-2fddf8668b1e" containerID="1fc02b4b791ff371a7d0d6d30185eaad8421a39fae49a8ca72d275cf2315a9de" exitCode=0 Dec 10 14:55:59 crc kubenswrapper[4727]: I1210 14:55:59.775393 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flmw8" event={"ID":"e8436452-f31c-4b3b-9c04-2fddf8668b1e","Type":"ContainerDied","Data":"1fc02b4b791ff371a7d0d6d30185eaad8421a39fae49a8ca72d275cf2315a9de"} Dec 10 14:56:01 crc kubenswrapper[4727]: I1210 14:56:01.373158 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="f429b94a-d632-41b7-85c3-584f6dfe4475" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 14:56:02 crc kubenswrapper[4727]: 
I1210 14:56:02.116760 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:56:02 crc kubenswrapper[4727]: I1210 14:56:02.124608 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2b2c88bb-9134-46aa-8595-4762fca3fb57-etc-swift\") pod \"swift-storage-0\" (UID: \"2b2c88bb-9134-46aa-8595-4762fca3fb57\") " pod="openstack/swift-storage-0" Dec 10 14:56:02 crc kubenswrapper[4727]: I1210 14:56:02.385722 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 10 14:56:03 crc kubenswrapper[4727]: I1210 14:56:03.757835 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 14:56:06 crc kubenswrapper[4727]: I1210 14:56:06.265240 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 10 14:56:06 crc kubenswrapper[4727]: I1210 14:56:06.840243 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"51d9cec6bbe9a12446fdb3cad8cd24453b09ad6aa5929f7e3a5d97b35f1014d3"} Dec 10 14:56:06 crc kubenswrapper[4727]: I1210 14:56:06.842451 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5phw9" event={"ID":"70a282a3-fd71-4fd7-9a0b-7871a930affc","Type":"ContainerStarted","Data":"d2b2f1777571005041008f97868685df5fa6514438ba93a1d36545c1bfbc1af5"} Dec 10 14:56:06 crc kubenswrapper[4727]: I1210 14:56:06.850845 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d","Type":"ContainerStarted","Data":"fb7b38fb401c1cb71ca1fdfcf19bef76c240b1fd63c4f35923f5febd779b629c"} Dec 10 14:56:06 crc kubenswrapper[4727]: I1210 14:56:06.854055 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flmw8" event={"ID":"e8436452-f31c-4b3b-9c04-2fddf8668b1e","Type":"ContainerStarted","Data":"c7976b327b1eafbdb39babe9616c355feedcb39f40775f6622b3b81f69cc4f0b"} Dec 10 14:56:06 crc kubenswrapper[4727]: I1210 14:56:06.875194 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-5phw9" podStartSLOduration=2.514439502 podStartE2EDuration="24.875170777s" podCreationTimestamp="2025-12-10 14:55:42 +0000 UTC" firstStartedPulling="2025-12-10 14:55:43.11071988 +0000 UTC m=+1447.305494432" lastFinishedPulling="2025-12-10 14:56:05.471451165 +0000 UTC m=+1469.666225707" observedRunningTime="2025-12-10 14:56:06.862166799 +0000 UTC m=+1471.056941341" watchObservedRunningTime="2025-12-10 14:56:06.875170777 +0000 UTC m=+1471.069945319" Dec 10 14:56:06 crc kubenswrapper[4727]: I1210 14:56:06.894998 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=22.637844699 podStartE2EDuration="1m58.894977728s" podCreationTimestamp="2025-12-10 14:54:08 +0000 UTC" firstStartedPulling="2025-12-10 14:54:29.227253773 +0000 UTC m=+1373.422028315" lastFinishedPulling="2025-12-10 14:56:05.484386802 +0000 UTC m=+1469.679161344" observedRunningTime="2025-12-10 14:56:06.891640983 +0000 UTC m=+1471.086415525" 
watchObservedRunningTime="2025-12-10 14:56:06.894977728 +0000 UTC m=+1471.089752270" Dec 10 14:56:06 crc kubenswrapper[4727]: I1210 14:56:06.922459 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-flmw8" podStartSLOduration=3.977281516 podStartE2EDuration="24.922437591s" podCreationTimestamp="2025-12-10 14:55:42 +0000 UTC" firstStartedPulling="2025-12-10 14:55:44.591234131 +0000 UTC m=+1448.786008673" lastFinishedPulling="2025-12-10 14:56:05.536390206 +0000 UTC m=+1469.731164748" observedRunningTime="2025-12-10 14:56:06.918180464 +0000 UTC m=+1471.112955006" watchObservedRunningTime="2025-12-10 14:56:06.922437591 +0000 UTC m=+1471.117212133" Dec 10 14:56:07 crc kubenswrapper[4727]: I1210 14:56:07.723798 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:56:07 crc kubenswrapper[4727]: I1210 14:56:07.724209 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:56:08 crc kubenswrapper[4727]: I1210 14:56:08.880097 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"e671c31bfe5eb23867b189d7e1e877154770e601a0cd93e4e5d455d09f418477"} Dec 10 14:56:08 crc kubenswrapper[4727]: I1210 14:56:08.880426 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"666c5fb0266cb5162bfc883145c817889bd4772eacea6b43a31c736954de1863"} Dec 10 14:56:09 crc kubenswrapper[4727]: I1210 14:56:09.891479 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"1acadf6690e41ce3b41687c32a1aeba25434e2b7833dbd56a54fbe0cf25f1d3f"} Dec 10 14:56:09 crc kubenswrapper[4727]: I1210 14:56:09.891986 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"4ce4fd10e16dc0bbeb18c825caefdacd17dfbdfbbdb9f420aa5d63f701cd0af6"} Dec 10 14:56:10 crc kubenswrapper[4727]: I1210 14:56:10.239688 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:10 crc kubenswrapper[4727]: I1210 14:56:10.239758 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:10 crc kubenswrapper[4727]: I1210 14:56:10.243599 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:10 crc kubenswrapper[4727]: I1210 14:56:10.902736 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:11 crc kubenswrapper[4727]: I1210 14:56:11.374350 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 14:56:12 crc kubenswrapper[4727]: I1210 14:56:12.913824 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:56:12 crc kubenswrapper[4727]: I1210 14:56:12.914427 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:56:12 crc kubenswrapper[4727]: I1210 14:56:12.926817 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zb5mz" event={"ID":"20c3a0fe-e0f7-4f79-ae22-d143511424e9","Type":"ContainerStarted","Data":"d92b4d40fd1384c1b441e23abb935ba80a45e0204cb1ef208abc372e63d83fc4"} Dec 10 14:56:12 crc kubenswrapper[4727]: I1210 14:56:12.947231 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-zb5mz" podStartSLOduration=3.909660877 podStartE2EDuration="40.947200723s" podCreationTimestamp="2025-12-10 14:55:32 +0000 UTC" firstStartedPulling="2025-12-10 14:55:34.859336101 +0000 UTC m=+1439.054110643" lastFinishedPulling="2025-12-10 14:56:11.896875957 +0000 UTC m=+1476.091650489" observedRunningTime="2025-12-10 14:56:12.944923266 +0000 UTC m=+1477.139697818" watchObservedRunningTime="2025-12-10 14:56:12.947200723 +0000 UTC m=+1477.141975265" Dec 10 14:56:13 crc kubenswrapper[4727]: I1210 14:56:13.946897 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"de5976726b134776782c7642500d7afcfe7a98682ef531352324d61ddef60b89"} Dec 10 14:56:13 crc kubenswrapper[4727]: I1210 14:56:13.947265 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"820e2609de58f144bac86c45a7f610f28e6f44de5e2ed5828bcf66fbd543ebb9"} Dec 10 14:56:13 crc kubenswrapper[4727]: I1210 14:56:13.947281 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"6e4a04290f8e36ce696ab84f1adc1632c73798541d766d4bca33ab2757e7d7ff"} Dec 10 14:56:13 crc kubenswrapper[4727]: I1210 14:56:13.969667 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-flmw8" podUID="e8436452-f31c-4b3b-9c04-2fddf8668b1e" containerName="registry-server" probeResult="failure" output=< Dec 10 14:56:13 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Dec 10 14:56:13 crc kubenswrapper[4727]: > Dec 10 14:56:14 crc kubenswrapper[4727]: I1210 14:56:14.384852 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:56:14 crc kubenswrapper[4727]: I1210 14:56:14.385447 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerName="prometheus" containerID="cri-o://b3f7d4a4588cb831b89c9cd6668b9e17e95bf5ae78c88c6ebc35ec6e0f065e3e" gracePeriod=600 Dec 10 14:56:14 crc kubenswrapper[4727]: I1210 14:56:14.385548 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerName="thanos-sidecar" containerID="cri-o://fb7b38fb401c1cb71ca1fdfcf19bef76c240b1fd63c4f35923f5febd779b629c" gracePeriod=600 
Dec 10 14:56:14 crc kubenswrapper[4727]: I1210 14:56:14.385564 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerName="config-reloader" containerID="cri-o://6dd467180182be7b79757d8d468cd20ec6607a5b859cb7c54be409e81b686d72" gracePeriod=600 Dec 10 14:56:14 crc kubenswrapper[4727]: I1210 14:56:14.983560 4727 generic.go:334] "Generic (PLEG): container finished" podID="70a282a3-fd71-4fd7-9a0b-7871a930affc" containerID="d2b2f1777571005041008f97868685df5fa6514438ba93a1d36545c1bfbc1af5" exitCode=0 Dec 10 14:56:14 crc kubenswrapper[4727]: I1210 14:56:14.983660 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5phw9" event={"ID":"70a282a3-fd71-4fd7-9a0b-7871a930affc","Type":"ContainerDied","Data":"d2b2f1777571005041008f97868685df5fa6514438ba93a1d36545c1bfbc1af5"} Dec 10 14:56:14 crc kubenswrapper[4727]: I1210 14:56:14.987191 4727 generic.go:334] "Generic (PLEG): container finished" podID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerID="fb7b38fb401c1cb71ca1fdfcf19bef76c240b1fd63c4f35923f5febd779b629c" exitCode=0 Dec 10 14:56:14 crc kubenswrapper[4727]: I1210 14:56:14.987213 4727 generic.go:334] "Generic (PLEG): container finished" podID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerID="6dd467180182be7b79757d8d468cd20ec6607a5b859cb7c54be409e81b686d72" exitCode=0 Dec 10 14:56:14 crc kubenswrapper[4727]: I1210 14:56:14.987222 4727 generic.go:334] "Generic (PLEG): container finished" podID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerID="b3f7d4a4588cb831b89c9cd6668b9e17e95bf5ae78c88c6ebc35ec6e0f065e3e" exitCode=0 Dec 10 14:56:14 crc kubenswrapper[4727]: I1210 14:56:14.987284 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d","Type":"ContainerDied","Data":"fb7b38fb401c1cb71ca1fdfcf19bef76c240b1fd63c4f35923f5febd779b629c"} Dec 10 14:56:14 crc kubenswrapper[4727]: I1210 14:56:14.987333 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d","Type":"ContainerDied","Data":"6dd467180182be7b79757d8d468cd20ec6607a5b859cb7c54be409e81b686d72"} Dec 10 14:56:14 crc kubenswrapper[4727]: I1210 14:56:14.987344 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d","Type":"ContainerDied","Data":"b3f7d4a4588cb831b89c9cd6668b9e17e95bf5ae78c88c6ebc35ec6e0f065e3e"} Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.000239 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"47e2cf7b6975e2779328ce8ec3598bea1b91ff4b7f2530c0d3f33eaf96c0faaf"} Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.578569 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.655408 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-tls-assets\") pod \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.655513 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-config-out\") pod \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.655568 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm6jp\" (UniqueName: \"kubernetes.io/projected/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-kube-api-access-qm6jp\") pod \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.655614 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-web-config\") pod \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.655649 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-config\") pod \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.655715 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-thanos-prometheus-http-client-file\") pod \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.655871 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf724714-665b-4af6-a045-3da7b60440bb\") pod \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.655954 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-prometheus-metric-storage-rulefiles-0\") pod \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\" (UID: \"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d\") " Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.659132 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" (UID: "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.663829 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-kube-api-access-qm6jp" (OuterVolumeSpecName: "kube-api-access-qm6jp") pod "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" (UID: "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d"). InnerVolumeSpecName "kube-api-access-qm6jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.667192 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" (UID: "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.676638 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" (UID: "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.683131 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-config" (OuterVolumeSpecName: "config") pod "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" (UID: "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.684138 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-config-out" (OuterVolumeSpecName: "config-out") pod "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" (UID: "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.684230 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf724714-665b-4af6-a045-3da7b60440bb" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" (UID: "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d"). InnerVolumeSpecName "pvc-cf724714-665b-4af6-a045-3da7b60440bb". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.701602 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-web-config" (OuterVolumeSpecName: "web-config") pod "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" (UID: "31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.758691 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cf724714-665b-4af6-a045-3da7b60440bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf724714-665b-4af6-a045-3da7b60440bb\") on node \"crc\" " Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.758739 4727 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.758755 4727 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.758771 4727 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-config-out\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.758784 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm6jp\" (UniqueName: \"kubernetes.io/projected/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-kube-api-access-qm6jp\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.758797 4727 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-web-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.758808 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.758822 4727 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.786629 4727 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.786791 4727 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cf724714-665b-4af6-a045-3da7b60440bb" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf724714-665b-4af6-a045-3da7b60440bb") on node "crc" Dec 10 14:56:15 crc kubenswrapper[4727]: I1210 14:56:15.861058 4727 reconciler_common.go:293] "Volume detached for volume \"pvc-cf724714-665b-4af6-a045-3da7b60440bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf724714-665b-4af6-a045-3da7b60440bb\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.013515 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d","Type":"ContainerDied","Data":"2980d71c9993d0d78dcebf5a400616640a7dc840609dad61b7bcab33576bf477"} Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.013570 4727 scope.go:117] "RemoveContainer" containerID="fb7b38fb401c1cb71ca1fdfcf19bef76c240b1fd63c4f35923f5febd779b629c" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.013570 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.019653 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"23f401eb0f5f67b6bceed6b914014ce63e028bf88dc2f2075ce1b8d7c9406b2e"} Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.061549 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.065276 4727 scope.go:117] "RemoveContainer" containerID="6dd467180182be7b79757d8d468cd20ec6607a5b859cb7c54be409e81b686d72" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.071486 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.095187 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:56:16 crc kubenswrapper[4727]: E1210 14:56:16.095884 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56fa8f94-70a1-47ce-85c5-947e889ba79c" containerName="swift-ring-rebalance" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.096062 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="56fa8f94-70a1-47ce-85c5-947e889ba79c" containerName="swift-ring-rebalance" Dec 10 14:56:16 crc kubenswrapper[4727]: E1210 14:56:16.096160 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerName="thanos-sidecar" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.096234 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerName="thanos-sidecar" Dec 10 14:56:16 crc kubenswrapper[4727]: E1210 14:56:16.096327 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerName="prometheus" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.096409 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerName="prometheus" Dec 10 14:56:16 crc kubenswrapper[4727]: E1210 14:56:16.096485 4727 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="dd2d4cb1-22c9-4a83-a377-799a48b75298" containerName="ovn-config" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.096572 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2d4cb1-22c9-4a83-a377-799a48b75298" containerName="ovn-config" Dec 10 14:56:16 crc kubenswrapper[4727]: E1210 14:56:16.096721 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6fded7-3029-48a5-96b8-6f8296acd34c" containerName="init" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.096802 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6fded7-3029-48a5-96b8-6f8296acd34c" containerName="init" Dec 10 14:56:16 crc kubenswrapper[4727]: E1210 14:56:16.096885 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerName="config-reloader" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.096999 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerName="config-reloader" Dec 10 14:56:16 crc kubenswrapper[4727]: E1210 14:56:16.097093 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6fded7-3029-48a5-96b8-6f8296acd34c" containerName="dnsmasq-dns" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.097177 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6fded7-3029-48a5-96b8-6f8296acd34c" containerName="dnsmasq-dns" Dec 10 14:56:16 crc kubenswrapper[4727]: E1210 14:56:16.097276 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerName="init-config-reloader" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.097362 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerName="init-config-reloader" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.097740 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerName="config-reloader" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.097846 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2d4cb1-22c9-4a83-a377-799a48b75298" containerName="ovn-config" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.097963 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerName="thanos-sidecar" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.098089 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerName="prometheus" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.098191 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d6fded7-3029-48a5-96b8-6f8296acd34c" containerName="dnsmasq-dns" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.098294 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="56fa8f94-70a1-47ce-85c5-947e889ba79c" containerName="swift-ring-rebalance" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.101943 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.105326 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.105529 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.105636 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.105348 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.106013 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.109074 4727 scope.go:117] "RemoveContainer" containerID="b3f7d4a4588cb831b89c9cd6668b9e17e95bf5ae78c88c6ebc35ec6e0f065e3e" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.109422 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-ls2tl" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.121660 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.137931 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.161121 4727 scope.go:117] "RemoveContainer" containerID="d5cbbdaa3831a9b4f41b599a46dac5b70174964e0f13a445afa5da20fb67ea16" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.288494 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkdgs\" (UniqueName: \"kubernetes.io/projected/d61f7e3d-578d-4429-ad9c-31ba3e8be091-kube-api-access-dkdgs\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.288832 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.288889 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.288942 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cf724714-665b-4af6-a045-3da7b60440bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf724714-665b-4af6-a045-3da7b60440bb\") pod \"prometheus-metric-storage-0\" (UID: 
\"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.289030 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d61f7e3d-578d-4429-ad9c-31ba3e8be091-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.289068 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d61f7e3d-578d-4429-ad9c-31ba3e8be091-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.289104 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-config\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.289150 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.289191 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.289215 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.289326 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d61f7e3d-578d-4429-ad9c-31ba3e8be091-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.401923 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.402001 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cf724714-665b-4af6-a045-3da7b60440bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf724714-665b-4af6-a045-3da7b60440bb\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.402123 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d61f7e3d-578d-4429-ad9c-31ba3e8be091-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.402158 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d61f7e3d-578d-4429-ad9c-31ba3e8be091-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.402193 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-config\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.402243 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.402316 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.402346 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.402467 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d61f7e3d-578d-4429-ad9c-31ba3e8be091-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.402529 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkdgs\" (UniqueName: \"kubernetes.io/projected/d61f7e3d-578d-4429-ad9c-31ba3e8be091-kube-api-access-dkdgs\") pod \"prometheus-metric-storage-0\" (UID: 
\"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.402556 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.410989 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d61f7e3d-578d-4429-ad9c-31ba3e8be091-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.415595 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.415644 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cf724714-665b-4af6-a045-3da7b60440bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf724714-665b-4af6-a045-3da7b60440bb\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bfabbbee530072909eb6f2445f2c3f4483c354d55e6afa92ad99487deb5d849d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.415733 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.417204 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d61f7e3d-578d-4429-ad9c-31ba3e8be091-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.418168 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.418563 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d61f7e3d-578d-4429-ad9c-31ba3e8be091-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.419162 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.420793 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.435789 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.436716 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d61f7e3d-578d-4429-ad9c-31ba3e8be091-config\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.439892 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkdgs\" (UniqueName: \"kubernetes.io/projected/d61f7e3d-578d-4429-ad9c-31ba3e8be091-kube-api-access-dkdgs\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.457488 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5phw9" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.489303 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cf724714-665b-4af6-a045-3da7b60440bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf724714-665b-4af6-a045-3da7b60440bb\") pod \"prometheus-metric-storage-0\" (UID: \"d61f7e3d-578d-4429-ad9c-31ba3e8be091\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.578703 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" path="/var/lib/kubelet/pods/31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d/volumes" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.613248 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70a282a3-fd71-4fd7-9a0b-7871a930affc-config-data\") pod \"70a282a3-fd71-4fd7-9a0b-7871a930affc\" (UID: \"70a282a3-fd71-4fd7-9a0b-7871a930affc\") " Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.613320 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70a282a3-fd71-4fd7-9a0b-7871a930affc-combined-ca-bundle\") pod \"70a282a3-fd71-4fd7-9a0b-7871a930affc\" (UID: \"70a282a3-fd71-4fd7-9a0b-7871a930affc\") " Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.613505 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bcld\" (UniqueName: \"kubernetes.io/projected/70a282a3-fd71-4fd7-9a0b-7871a930affc-kube-api-access-5bcld\") pod \"70a282a3-fd71-4fd7-9a0b-7871a930affc\" (UID: \"70a282a3-fd71-4fd7-9a0b-7871a930affc\") " Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.621506 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70a282a3-fd71-4fd7-9a0b-7871a930affc-kube-api-access-5bcld" (OuterVolumeSpecName: "kube-api-access-5bcld") pod "70a282a3-fd71-4fd7-9a0b-7871a930affc" (UID: "70a282a3-fd71-4fd7-9a0b-7871a930affc"). InnerVolumeSpecName "kube-api-access-5bcld". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.652052 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70a282a3-fd71-4fd7-9a0b-7871a930affc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70a282a3-fd71-4fd7-9a0b-7871a930affc" (UID: "70a282a3-fd71-4fd7-9a0b-7871a930affc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.691914 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70a282a3-fd71-4fd7-9a0b-7871a930affc-config-data" (OuterVolumeSpecName: "config-data") pod "70a282a3-fd71-4fd7-9a0b-7871a930affc" (UID: "70a282a3-fd71-4fd7-9a0b-7871a930affc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.715500 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70a282a3-fd71-4fd7-9a0b-7871a930affc-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.715546 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70a282a3-fd71-4fd7-9a0b-7871a930affc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.715557 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bcld\" (UniqueName: \"kubernetes.io/projected/70a282a3-fd71-4fd7-9a0b-7871a930affc-kube-api-access-5bcld\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:16 crc kubenswrapper[4727]: I1210 14:56:16.743575 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.078516 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"09e15a8969aee2a4ea987e089ceb30fb5cc90c949932338d2efa078f28b78c2e"} Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.078564 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"469402d2acd3c90cafc69630189a23e8a965620f9108ddfa3681e6b8b8bc9350"} Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.078573 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"31ea188152d15473e5f8adfd4953e1bed5fc55e7bee225c4fdd959ddf0abda75"} Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.078581 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"ab7f8b7796878f0a35eed1269db1079d367a7713cad059dd6c16452fb4679e4b"} Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.080495 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5phw9" event={"ID":"70a282a3-fd71-4fd7-9a0b-7871a930affc","Type":"ContainerDied","Data":"c81879d4cb3078698c57168f23de60196d73b10d43e4bec9f242739920c35f09"} Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.080533 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c81879d4cb3078698c57168f23de60196d73b10d43e4bec9f242739920c35f09" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.080599 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5phw9" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.299276 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-nzrms"] Dec 10 14:56:17 crc kubenswrapper[4727]: E1210 14:56:17.299892 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a282a3-fd71-4fd7-9a0b-7871a930affc" containerName="keystone-db-sync" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.299932 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a282a3-fd71-4fd7-9a0b-7871a930affc" containerName="keystone-db-sync" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.300188 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="70a282a3-fd71-4fd7-9a0b-7871a930affc" containerName="keystone-db-sync" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.301644 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.312798 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-nzrms"] Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.358992 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.423176 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2479h"] Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.425358 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.433495 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.433770 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pw9zq" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.443226 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-nzrms\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.443322 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qvkt\" (UniqueName: \"kubernetes.io/projected/21b24a6e-1a57-447a-9452-e9644eb543b3-kube-api-access-2qvkt\") pod \"dnsmasq-dns-f877ddd87-nzrms\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.443371 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-nzrms\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.443418 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-dns-svc\") pod \"dnsmasq-dns-f877ddd87-nzrms\" (UID: 
\"21b24a6e-1a57-447a-9452-e9644eb543b3\") " pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.443920 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-config\") pod \"dnsmasq-dns-f877ddd87-nzrms\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.466653 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.467169 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.467939 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.534185 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2479h"] Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.546370 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-config-data\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.546431 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9m5w\" (UniqueName: \"kubernetes.io/projected/5d1bf00b-226f-4103-b12d-551e0974c8da-kube-api-access-z9m5w\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.546472 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-config\") pod \"dnsmasq-dns-f877ddd87-nzrms\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.546508 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-nzrms\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.546552 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-scripts\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.546579 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qvkt\" (UniqueName: \"kubernetes.io/projected/21b24a6e-1a57-447a-9452-e9644eb543b3-kube-api-access-2qvkt\") pod \"dnsmasq-dns-f877ddd87-nzrms\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.546623 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-nzrms\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.546676 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-dns-svc\") pod \"dnsmasq-dns-f877ddd87-nzrms\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.546723 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-credential-keys\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.546776 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-combined-ca-bundle\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.546868 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-fernet-keys\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.555575 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-config\") pod \"dnsmasq-dns-f877ddd87-nzrms\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.560113 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-nzrms\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.563796 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-nzrms\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.564478 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-dns-svc\") pod \"dnsmasq-dns-f877ddd87-nzrms\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.646956 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qvkt\" (UniqueName: 
\"kubernetes.io/projected/21b24a6e-1a57-447a-9452-e9644eb543b3-kube-api-access-2qvkt\") pod \"dnsmasq-dns-f877ddd87-nzrms\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.650302 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-fernet-keys\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.650620 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-config-data\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.650649 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9m5w\" (UniqueName: \"kubernetes.io/projected/5d1bf00b-226f-4103-b12d-551e0974c8da-kube-api-access-z9m5w\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.650718 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-scripts\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.650806 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-credential-keys\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.650848 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-combined-ca-bundle\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.658526 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-scripts\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.663891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-combined-ca-bundle\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.671297 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-config-data\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " 
pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.674932 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-credential-keys\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.675615 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-fernet-keys\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.731401 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9m5w\" (UniqueName: \"kubernetes.io/projected/5d1bf00b-226f-4103-b12d-551e0974c8da-kube-api-access-z9m5w\") pod \"keystone-bootstrap-2479h\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.847463 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.928459 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.983719 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-8s8pt"] Dec 10 14:56:17 crc kubenswrapper[4727]: I1210 14:56:17.985521 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.013096 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-l9hd6"] Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.015626 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-l9hd6" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.024780 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vlk2f" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.027592 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-l9hd6"] Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.038188 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.038638 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.068515 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-combined-ca-bundle\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.068592 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/672a3a2e-19cb-4512-a908-c8d6f16753f7-config\") pod \"neutron-db-sync-l9hd6\" (UID: \"672a3a2e-19cb-4512-a908-c8d6f16753f7\") " pod="openstack/neutron-db-sync-l9hd6" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.068679 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-scripts\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.068704 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-config-data\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.068775 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-db-sync-config-data\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.068811 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65nbx\" (UniqueName: \"kubernetes.io/projected/672a3a2e-19cb-4512-a908-c8d6f16753f7-kube-api-access-65nbx\") pod \"neutron-db-sync-l9hd6\" (UID: \"672a3a2e-19cb-4512-a908-c8d6f16753f7\") " pod="openstack/neutron-db-sync-l9hd6" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.068852 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672a3a2e-19cb-4512-a908-c8d6f16753f7-combined-ca-bundle\") pod \"neutron-db-sync-l9hd6\" (UID: \"672a3a2e-19cb-4512-a908-c8d6f16753f7\") " pod="openstack/neutron-db-sync-l9hd6" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.069041 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrxxz\" (UniqueName: \"kubernetes.io/projected/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-kube-api-access-jrxxz\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.069096 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-etc-machine-id\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.089451 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sf5vj" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.089649 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.089813 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.097140 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-vrmr7"] Dec 10 14:56:18 crc kubenswrapper[4727]: I1210 14:56:18.099535 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.139066 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.141686 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.141794 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hd6hm" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.142971 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8s8pt"] Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.179251 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrxxz\" (UniqueName: \"kubernetes.io/projected/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-kube-api-access-jrxxz\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.179358 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-etc-machine-id\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.179404 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-combined-ca-bundle\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.179456 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/672a3a2e-19cb-4512-a908-c8d6f16753f7-config\") pod \"neutron-db-sync-l9hd6\" (UID: \"672a3a2e-19cb-4512-a908-c8d6f16753f7\") " pod="openstack/neutron-db-sync-l9hd6" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.179519 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr5mn\" (UniqueName: \"kubernetes.io/projected/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-kube-api-access-fr5mn\") pod \"placement-db-sync-vrmr7\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.179562 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-config-data\") pod \"placement-db-sync-vrmr7\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.179593 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-scripts\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.179606 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-etc-machine-id\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.179618 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-config-data\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.179675 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-combined-ca-bundle\") pod \"placement-db-sync-vrmr7\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.179714 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-logs\") pod \"placement-db-sync-vrmr7\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.179757 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-db-sync-config-data\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.179782 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65nbx\" (UniqueName: \"kubernetes.io/projected/672a3a2e-19cb-4512-a908-c8d6f16753f7-kube-api-access-65nbx\") pod \"neutron-db-sync-l9hd6\" (UID: 
\"672a3a2e-19cb-4512-a908-c8d6f16753f7\") " pod="openstack/neutron-db-sync-l9hd6" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.179833 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672a3a2e-19cb-4512-a908-c8d6f16753f7-combined-ca-bundle\") pod \"neutron-db-sync-l9hd6\" (UID: \"672a3a2e-19cb-4512-a908-c8d6f16753f7\") " pod="openstack/neutron-db-sync-l9hd6" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.179875 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-scripts\") pod \"placement-db-sync-vrmr7\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.183727 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-vrmr7"] Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.192546 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-combined-ca-bundle\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.197709 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/672a3a2e-19cb-4512-a908-c8d6f16753f7-config\") pod \"neutron-db-sync-l9hd6\" (UID: \"672a3a2e-19cb-4512-a908-c8d6f16753f7\") " pod="openstack/neutron-db-sync-l9hd6" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.202448 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672a3a2e-19cb-4512-a908-c8d6f16753f7-combined-ca-bundle\") pod \"neutron-db-sync-l9hd6\" (UID: \"672a3a2e-19cb-4512-a908-c8d6f16753f7\") " pod="openstack/neutron-db-sync-l9hd6" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.202899 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-scripts\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.207032 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-config-data\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.218834 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-db-sync-config-data\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.242078 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="31d2bc7a-cfd8-497a-bbc9-1a25d3388c4d" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.255237 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65nbx\" (UniqueName: \"kubernetes.io/projected/672a3a2e-19cb-4512-a908-c8d6f16753f7-kube-api-access-65nbx\") pod \"neutron-db-sync-l9hd6\" (UID: \"672a3a2e-19cb-4512-a908-c8d6f16753f7\") " pod="openstack/neutron-db-sync-l9hd6" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.272373 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrxxz\" (UniqueName: \"kubernetes.io/projected/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-kube-api-access-jrxxz\") pod \"cinder-db-sync-8s8pt\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.282273 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"1dfca230bf90f0dc005d1516f87933e96b1efdfba7abf67144a6381db8a6d557"} Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.288792 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr5mn\" (UniqueName: \"kubernetes.io/projected/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-kube-api-access-fr5mn\") pod \"placement-db-sync-vrmr7\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.288927 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-config-data\") pod \"placement-db-sync-vrmr7\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.288996 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-combined-ca-bundle\") pod \"placement-db-sync-vrmr7\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.289231 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-logs\") pod \"placement-db-sync-vrmr7\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.289679 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-scripts\") pod \"placement-db-sync-vrmr7\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.294017 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-nzrms"] Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.295211 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-logs\") pod \"placement-db-sync-vrmr7\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.298836 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-config-data\") pod \"placement-db-sync-vrmr7\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.310240 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-scripts\") pod \"placement-db-sync-vrmr7\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.351549 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d61f7e3d-578d-4429-ad9c-31ba3e8be091","Type":"ContainerStarted","Data":"3a5fc05994c691456b8e135a0b5d3201de5f707ac36e4b2b2fda5eb005de235a"} Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.354713 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-8fgtj"] Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.354991 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-combined-ca-bundle\") pod \"placement-db-sync-vrmr7\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.365260 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8fgtj" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.370014 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr5mn\" (UniqueName: \"kubernetes.io/projected/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-kube-api-access-fr5mn\") pod \"placement-db-sync-vrmr7\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.373475 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-66mjv" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.373888 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.395379 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.396354 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-l9hd6" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.444181 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-vrmr7" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.489241 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-8fgtj"] Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.507496 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7744ce-28ea-4e2f-a20e-a925b562e221-combined-ca-bundle\") pod \"barbican-db-sync-8fgtj\" (UID: \"eb7744ce-28ea-4e2f-a20e-a925b562e221\") " pod="openstack/barbican-db-sync-8fgtj" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.507556 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv7nz\" (UniqueName: \"kubernetes.io/projected/eb7744ce-28ea-4e2f-a20e-a925b562e221-kube-api-access-gv7nz\") pod \"barbican-db-sync-8fgtj\" (UID: \"eb7744ce-28ea-4e2f-a20e-a925b562e221\") " pod="openstack/barbican-db-sync-8fgtj" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.507593 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb7744ce-28ea-4e2f-a20e-a925b562e221-db-sync-config-data\") pod \"barbican-db-sync-8fgtj\" (UID: \"eb7744ce-28ea-4e2f-a20e-a925b562e221\") " pod="openstack/barbican-db-sync-8fgtj" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.542591 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-rnv6s"] Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.544345 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.550668 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-jlplt" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.550937 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.551048 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.561075 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.611303 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7744ce-28ea-4e2f-a20e-a925b562e221-combined-ca-bundle\") pod \"barbican-db-sync-8fgtj\" (UID: \"eb7744ce-28ea-4e2f-a20e-a925b562e221\") " pod="openstack/barbican-db-sync-8fgtj" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.611353 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-combined-ca-bundle\") pod \"cloudkitty-db-sync-rnv6s\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.611387 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv7nz\" (UniqueName: \"kubernetes.io/projected/eb7744ce-28ea-4e2f-a20e-a925b562e221-kube-api-access-gv7nz\") pod \"barbican-db-sync-8fgtj\" 
(UID: \"eb7744ce-28ea-4e2f-a20e-a925b562e221\") " pod="openstack/barbican-db-sync-8fgtj" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.611434 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb7744ce-28ea-4e2f-a20e-a925b562e221-db-sync-config-data\") pod \"barbican-db-sync-8fgtj\" (UID: \"eb7744ce-28ea-4e2f-a20e-a925b562e221\") " pod="openstack/barbican-db-sync-8fgtj" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.611951 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-scripts\") pod \"cloudkitty-db-sync-rnv6s\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.612073 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg2k8\" (UniqueName: \"kubernetes.io/projected/8ef9b442-2a15-4657-b111-5af4a72d39e4-kube-api-access-rg2k8\") pod \"cloudkitty-db-sync-rnv6s\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.612117 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8ef9b442-2a15-4657-b111-5af4a72d39e4-certs\") pod \"cloudkitty-db-sync-rnv6s\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.612428 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-config-data\") pod \"cloudkitty-db-sync-rnv6s\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.616431 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb7744ce-28ea-4e2f-a20e-a925b562e221-db-sync-config-data\") pod \"barbican-db-sync-8fgtj\" (UID: \"eb7744ce-28ea-4e2f-a20e-a925b562e221\") " pod="openstack/barbican-db-sync-8fgtj" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.628258 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7744ce-28ea-4e2f-a20e-a925b562e221-combined-ca-bundle\") pod \"barbican-db-sync-8fgtj\" (UID: \"eb7744ce-28ea-4e2f-a20e-a925b562e221\") " pod="openstack/barbican-db-sync-8fgtj" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.628767 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-g5l52"] Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.632022 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.646587 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv7nz\" (UniqueName: \"kubernetes.io/projected/eb7744ce-28ea-4e2f-a20e-a925b562e221-kube-api-access-gv7nz\") pod \"barbican-db-sync-8fgtj\" (UID: \"eb7744ce-28ea-4e2f-a20e-a925b562e221\") " pod="openstack/barbican-db-sync-8fgtj" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.662045 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-rnv6s"] Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.708657 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8fgtj" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.719964 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2k8\" (UniqueName: \"kubernetes.io/projected/8ef9b442-2a15-4657-b111-5af4a72d39e4-kube-api-access-rg2k8\") pod \"cloudkitty-db-sync-rnv6s\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.720014 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-g5l52\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.720058 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8ef9b442-2a15-4657-b111-5af4a72d39e4-certs\") pod \"cloudkitty-db-sync-rnv6s\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.720106 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-config-data\") pod \"cloudkitty-db-sync-rnv6s\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.720145 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-g5l52\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.720160 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-config\") pod \"dnsmasq-dns-68dcc9cf6f-g5l52\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.720225 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-g5l52\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc 
kubenswrapper[4727]: I1210 14:56:18.720252 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x75w\" (UniqueName: \"kubernetes.io/projected/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-kube-api-access-8x75w\") pod \"dnsmasq-dns-68dcc9cf6f-g5l52\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.720283 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-combined-ca-bundle\") pod \"cloudkitty-db-sync-rnv6s\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.720344 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-scripts\") pod \"cloudkitty-db-sync-rnv6s\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.723069 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.726084 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.728662 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.728877 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.736512 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8ef9b442-2a15-4657-b111-5af4a72d39e4-certs\") pod \"cloudkitty-db-sync-rnv6s\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.738487 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-scripts\") pod \"cloudkitty-db-sync-rnv6s\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.739082 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-config-data\") pod \"cloudkitty-db-sync-rnv6s\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.741611 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-combined-ca-bundle\") pod \"cloudkitty-db-sync-rnv6s\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.765385 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg2k8\" (UniqueName: \"kubernetes.io/projected/8ef9b442-2a15-4657-b111-5af4a72d39e4-kube-api-access-rg2k8\") pod \"cloudkitty-db-sync-rnv6s\" (UID: 
\"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.771426 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-g5l52"] Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.803400 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.821853 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.821976 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-g5l52\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.821995 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-config\") pod \"dnsmasq-dns-68dcc9cf6f-g5l52\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.822029 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.822050 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fbht\" (UniqueName: \"kubernetes.io/projected/353e8adf-9611-450e-ac07-270a2550ee0e-kube-api-access-7fbht\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.822074 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353e8adf-9611-450e-ac07-270a2550ee0e-run-httpd\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.822096 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-g5l52\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.822147 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x75w\" (UniqueName: \"kubernetes.io/projected/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-kube-api-access-8x75w\") pod \"dnsmasq-dns-68dcc9cf6f-g5l52\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.822167 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353e8adf-9611-450e-ac07-270a2550ee0e-log-httpd\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.822210 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-scripts\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.822253 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-config-data\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.822289 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-g5l52\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.823421 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-g5l52\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.824702 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-config\") pod \"dnsmasq-dns-68dcc9cf6f-g5l52\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.825106 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-g5l52\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.825460 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-g5l52\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.856776 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x75w\" (UniqueName: \"kubernetes.io/projected/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-kube-api-access-8x75w\") pod \"dnsmasq-dns-68dcc9cf6f-g5l52\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.881468 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.924454 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.924524 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fbht\" (UniqueName: \"kubernetes.io/projected/353e8adf-9611-450e-ac07-270a2550ee0e-kube-api-access-7fbht\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.924562 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353e8adf-9611-450e-ac07-270a2550ee0e-run-httpd\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.924627 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353e8adf-9611-450e-ac07-270a2550ee0e-log-httpd\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.924675 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-scripts\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.924727 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-config-data\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.924778 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.925292 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.927062 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353e8adf-9611-450e-ac07-270a2550ee0e-run-httpd\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.929808 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.934319 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353e8adf-9611-450e-ac07-270a2550ee0e-log-httpd\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.935582 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.936451 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-scripts\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.939066 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-config-data\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:18.951786 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fbht\" (UniqueName: \"kubernetes.io/projected/353e8adf-9611-450e-ac07-270a2550ee0e-kube-api-access-7fbht\") pod \"ceilometer-0\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:19.242855 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:19.378349 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2b2c88bb-9134-46aa-8595-4762fca3fb57","Type":"ContainerStarted","Data":"7da868d27f000321d26a9b7c631242fcf9807dfe0be4e3eed0842af6472547dd"} Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:19.779538 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=41.532700562 podStartE2EDuration="50.779514402s" podCreationTimestamp="2025-12-10 14:55:29 +0000 UTC" firstStartedPulling="2025-12-10 14:56:06.270312601 +0000 UTC m=+1470.465087143" lastFinishedPulling="2025-12-10 14:56:15.517126441 +0000 UTC m=+1479.711900983" observedRunningTime="2025-12-10 14:56:19.433566125 +0000 UTC m=+1483.628340667" watchObservedRunningTime="2025-12-10 14:56:19.779514402 +0000 UTC m=+1483.974288944" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:19.809101 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-g5l52"] Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:19.857981 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qhqnk"] Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:19.873448 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:19.878430 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:19.911955 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qhqnk"] Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:19.979766 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:19.979831 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:19.981301 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-config\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:19.981391 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:19.981419 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:19 crc kubenswrapper[4727]: I1210 14:56:19.981459 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n568k\" (UniqueName: \"kubernetes.io/projected/e70cfc42-b547-4996-81d0-8be6145dc587-kube-api-access-n568k\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:20 crc kubenswrapper[4727]: W1210 14:56:20.067845 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64ba26d9_ff0f_4f17_9eb0_a8afb2756bd9.slice/crio-e00bcb8cd09f666247bf10788014891af00484d99d7693edc425c3775decb546 WatchSource:0}: Error finding container e00bcb8cd09f666247bf10788014891af00484d99d7693edc425c3775decb546: Status 404 returned error can't find the container with id e00bcb8cd09f666247bf10788014891af00484d99d7693edc425c3775decb546 Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.096479 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-config\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.096574 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.096600 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.096651 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n568k\" (UniqueName: \"kubernetes.io/projected/e70cfc42-b547-4996-81d0-8be6145dc587-kube-api-access-n568k\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.096712 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.096740 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: 
\"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.097710 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-config\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.099403 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.099488 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.100140 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.110033 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.110105 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-g5l52"] Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.150305 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-rnv6s"] Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.246803 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-8fgtj"] Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.259306 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n568k\" (UniqueName: \"kubernetes.io/projected/e70cfc42-b547-4996-81d0-8be6145dc587-kube-api-access-n568k\") pod \"dnsmasq-dns-58dd9ff6bc-qhqnk\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.323636 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-l9hd6"] Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.338678 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8s8pt"] Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.421307 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-nzrms"] Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.461728 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-nzrms" 
event={"ID":"21b24a6e-1a57-447a-9452-e9644eb543b3","Type":"ContainerStarted","Data":"09bee21b784ad22d0c52c58f207a7419ebaeb1d2668f973f2a56a3552ec5f56b"} Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.481123 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8s8pt" event={"ID":"6a7dc74a-65ab-4440-b4d0-33c102b7baeb","Type":"ContainerStarted","Data":"8754daf97a17deac9ac170b6c2924b6a4090218879e52d936f2bff618459f8f9"} Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.482306 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-l9hd6" event={"ID":"672a3a2e-19cb-4512-a908-c8d6f16753f7","Type":"ContainerStarted","Data":"e8e86c4527984486f107dea205078b1287cd2f69d04afe0746a0c6eb1efab590"} Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.484570 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2479h"] Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.506776 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vrmr7" event={"ID":"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43","Type":"ContainerStarted","Data":"fc09d9e9bba72267a4c120952f83ec952784ba88225a64b0528220c6434fbfc1"} Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.509008 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-vrmr7"] Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.519734 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.521949 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" event={"ID":"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9","Type":"ContainerStarted","Data":"e00bcb8cd09f666247bf10788014891af00484d99d7693edc425c3775decb546"} Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.522252 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.540196 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2479h" event={"ID":"5d1bf00b-226f-4103-b12d-551e0974c8da","Type":"ContainerStarted","Data":"c880c6d45bafd1dd52d96adae92a2c2280301834f96d19101bcb5ee802ba83d7"} Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.560362 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-rnv6s" event={"ID":"8ef9b442-2a15-4657-b111-5af4a72d39e4","Type":"ContainerStarted","Data":"c30c0c741754a30ceaa30e94b34871514df7832945ca996444bbc1fb1e2953e0"} Dec 10 14:56:20 crc kubenswrapper[4727]: I1210 14:56:20.562800 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8fgtj" event={"ID":"eb7744ce-28ea-4e2f-a20e-a925b562e221","Type":"ContainerStarted","Data":"629223e64f422fc6aba151b448806bb028697a80021709e1c572272814a876c1"} Dec 10 14:56:21 crc kubenswrapper[4727]: I1210 14:56:21.038053 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:56:21 crc kubenswrapper[4727]: I1210 14:56:21.388996 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qhqnk"] Dec 10 14:56:21 crc kubenswrapper[4727]: W1210 14:56:21.457784 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode70cfc42_b547_4996_81d0_8be6145dc587.slice/crio-077b1d20c3cf5d3ef01d4a993d13e329a652543290d57bbb81b42851c88a67da WatchSource:0}: Error finding container 077b1d20c3cf5d3ef01d4a993d13e329a652543290d57bbb81b42851c88a67da: Status 404 returned error can't find the container with id 077b1d20c3cf5d3ef01d4a993d13e329a652543290d57bbb81b42851c88a67da Dec 10 14:56:21 crc kubenswrapper[4727]: I1210 14:56:21.604314 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"353e8adf-9611-450e-ac07-270a2550ee0e","Type":"ContainerStarted","Data":"a070dc413d79ba3cea18d4b0fa5ff63e7a7bdaeca1560387505c4e2afa9e88c7"} Dec 10 14:56:21 crc kubenswrapper[4727]: I1210 14:56:21.613058 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-nzrms" event={"ID":"21b24a6e-1a57-447a-9452-e9644eb543b3","Type":"ContainerStarted","Data":"88b7e734914e794214e2aebef503156d1765f95ae9448c39f627466c3da7ec9f"} Dec 10 14:56:21 crc kubenswrapper[4727]: I1210 14:56:21.613204 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f877ddd87-nzrms" podUID="21b24a6e-1a57-447a-9452-e9644eb543b3" containerName="init" containerID="cri-o://88b7e734914e794214e2aebef503156d1765f95ae9448c39f627466c3da7ec9f" gracePeriod=10 Dec 10 14:56:21 crc kubenswrapper[4727]: I1210 14:56:21.626920 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-l9hd6" event={"ID":"672a3a2e-19cb-4512-a908-c8d6f16753f7","Type":"ContainerStarted","Data":"8678a2d1d4114f78beffb7d2645739e471a7dcdab2fa6f00021e307baeffa308"} Dec 10 14:56:21 crc kubenswrapper[4727]: I1210 14:56:21.632605 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d61f7e3d-578d-4429-ad9c-31ba3e8be091","Type":"ContainerStarted","Data":"6fe5490bfa7a5d52aff03aeca876aeef7be445f1a29e88d77ce3fdc765913148"} Dec 10 14:56:21 crc kubenswrapper[4727]: I1210 14:56:21.641310 4727 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" event={"ID":"e70cfc42-b547-4996-81d0-8be6145dc587","Type":"ContainerStarted","Data":"077b1d20c3cf5d3ef01d4a993d13e329a652543290d57bbb81b42851c88a67da"} Dec 10 14:56:21 crc kubenswrapper[4727]: I1210 14:56:21.648325 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" podUID="64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9" containerName="init" containerID="cri-o://4cd1f57722bcb8a0de8c1cc229ba372ddffd0839e4da6f8c404dc4b5252e6ed4" gracePeriod=10 Dec 10 14:56:21 crc kubenswrapper[4727]: I1210 14:56:21.684525 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-l9hd6" podStartSLOduration=4.684502925 podStartE2EDuration="4.684502925s" podCreationTimestamp="2025-12-10 14:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:21.681758285 +0000 UTC m=+1485.876532827" watchObservedRunningTime="2025-12-10 14:56:21.684502925 +0000 UTC m=+1485.879277467" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.395233 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.413488 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.423038 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-dns-svc\") pod \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.423277 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-ovsdbserver-nb\") pod \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.423381 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qvkt\" (UniqueName: \"kubernetes.io/projected/21b24a6e-1a57-447a-9452-e9644eb543b3-kube-api-access-2qvkt\") pod \"21b24a6e-1a57-447a-9452-e9644eb543b3\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.423487 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-dns-svc\") pod \"21b24a6e-1a57-447a-9452-e9644eb543b3\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.431101 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-ovsdbserver-nb\") pod \"21b24a6e-1a57-447a-9452-e9644eb543b3\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.431376 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-config\") pod \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\" (UID: 
\"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.431511 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-config\") pod \"21b24a6e-1a57-447a-9452-e9644eb543b3\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.431691 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-ovsdbserver-sb\") pod \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.431802 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-ovsdbserver-sb\") pod \"21b24a6e-1a57-447a-9452-e9644eb543b3\" (UID: \"21b24a6e-1a57-447a-9452-e9644eb543b3\") " Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.431919 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x75w\" (UniqueName: \"kubernetes.io/projected/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-kube-api-access-8x75w\") pod \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.443674 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21b24a6e-1a57-447a-9452-e9644eb543b3-kube-api-access-2qvkt" (OuterVolumeSpecName: "kube-api-access-2qvkt") pod "21b24a6e-1a57-447a-9452-e9644eb543b3" (UID: "21b24a6e-1a57-447a-9452-e9644eb543b3"). InnerVolumeSpecName "kube-api-access-2qvkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.446934 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-kube-api-access-8x75w" (OuterVolumeSpecName: "kube-api-access-8x75w") pod "64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9" (UID: "64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9"). InnerVolumeSpecName "kube-api-access-8x75w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.563723 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "21b24a6e-1a57-447a-9452-e9644eb543b3" (UID: "21b24a6e-1a57-447a-9452-e9644eb543b3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.567403 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x75w\" (UniqueName: \"kubernetes.io/projected/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-kube-api-access-8x75w\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.567897 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qvkt\" (UniqueName: \"kubernetes.io/projected/21b24a6e-1a57-447a-9452-e9644eb543b3-kube-api-access-2qvkt\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.601683 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "21b24a6e-1a57-447a-9452-e9644eb543b3" (UID: "21b24a6e-1a57-447a-9452-e9644eb543b3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.629637 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9" (UID: "64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.651836 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "21b24a6e-1a57-447a-9452-e9644eb543b3" (UID: "21b24a6e-1a57-447a-9452-e9644eb543b3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.651863 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-config" (OuterVolumeSpecName: "config") pod "64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9" (UID: "64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.652647 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9" (UID: "64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.669628 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.669658 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.669668 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.669677 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.669685 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.669694 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:22 crc kubenswrapper[4727]: E1210 14:56:22.671241 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-ovsdbserver-nb podName:64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9 nodeName:}" failed. No retries permitted until 2025-12-10 14:56:23.171203815 +0000 UTC m=+1487.365978357 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ovsdbserver-nb" (UniqueName: "kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-ovsdbserver-nb") pod "64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9" (UID: "64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9") : error deleting /var/lib/kubelet/pods/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9/volume-subpaths: remove /var/lib/kubelet/pods/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9/volume-subpaths: no such file or directory Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.678536 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-config" (OuterVolumeSpecName: "config") pod "21b24a6e-1a57-447a-9452-e9644eb543b3" (UID: "21b24a6e-1a57-447a-9452-e9644eb543b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.737312 4727 generic.go:334] "Generic (PLEG): container finished" podID="21b24a6e-1a57-447a-9452-e9644eb543b3" containerID="88b7e734914e794214e2aebef503156d1765f95ae9448c39f627466c3da7ec9f" exitCode=0 Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.737373 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-nzrms" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.737411 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-nzrms" event={"ID":"21b24a6e-1a57-447a-9452-e9644eb543b3","Type":"ContainerDied","Data":"88b7e734914e794214e2aebef503156d1765f95ae9448c39f627466c3da7ec9f"} Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.737439 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-nzrms" event={"ID":"21b24a6e-1a57-447a-9452-e9644eb543b3","Type":"ContainerDied","Data":"09bee21b784ad22d0c52c58f207a7419ebaeb1d2668f973f2a56a3552ec5f56b"} Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.737455 4727 scope.go:117] "RemoveContainer" containerID="88b7e734914e794214e2aebef503156d1765f95ae9448c39f627466c3da7ec9f" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.756312 4727 generic.go:334] "Generic (PLEG): container finished" podID="e70cfc42-b547-4996-81d0-8be6145dc587" containerID="0c2bf73919103062c830ef5f6142fe389b298690a3989a36ab70417276ff9d32" exitCode=0 Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.759192 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" event={"ID":"e70cfc42-b547-4996-81d0-8be6145dc587","Type":"ContainerDied","Data":"0c2bf73919103062c830ef5f6142fe389b298690a3989a36ab70417276ff9d32"} Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.773867 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b24a6e-1a57-447a-9452-e9644eb543b3-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.798882 4727 generic.go:334] "Generic (PLEG): container finished" podID="64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9" containerID="4cd1f57722bcb8a0de8c1cc229ba372ddffd0839e4da6f8c404dc4b5252e6ed4" exitCode=0 Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.799019 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.799562 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" event={"ID":"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9","Type":"ContainerDied","Data":"4cd1f57722bcb8a0de8c1cc229ba372ddffd0839e4da6f8c404dc4b5252e6ed4"} Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.799605 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-g5l52" event={"ID":"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9","Type":"ContainerDied","Data":"e00bcb8cd09f666247bf10788014891af00484d99d7693edc425c3775decb546"} Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.817706 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2479h" event={"ID":"5d1bf00b-226f-4103-b12d-551e0974c8da","Type":"ContainerStarted","Data":"9a0cfea999642f666c7f3840c0d1ad1186ef70157a8110ed8ee9d0a6a56acdd7"} Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.893984 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-nzrms"] Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.905489 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2479h" podStartSLOduration=5.905470362 podStartE2EDuration="5.905470362s" podCreationTimestamp="2025-12-10 14:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:22.89431063 +0000 UTC m=+1487.089085172" watchObservedRunningTime="2025-12-10 14:56:22.905470362 +0000 UTC m=+1487.100244904" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.905806 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-nzrms"] Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.910110 4727 scope.go:117] "RemoveContainer" containerID="88b7e734914e794214e2aebef503156d1765f95ae9448c39f627466c3da7ec9f" Dec 10 14:56:22 crc kubenswrapper[4727]: E1210 14:56:22.910802 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b7e734914e794214e2aebef503156d1765f95ae9448c39f627466c3da7ec9f\": container with ID starting with 88b7e734914e794214e2aebef503156d1765f95ae9448c39f627466c3da7ec9f not found: ID does not exist" containerID="88b7e734914e794214e2aebef503156d1765f95ae9448c39f627466c3da7ec9f" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.910833 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b7e734914e794214e2aebef503156d1765f95ae9448c39f627466c3da7ec9f"} err="failed to get container status \"88b7e734914e794214e2aebef503156d1765f95ae9448c39f627466c3da7ec9f\": rpc error: code = NotFound desc = could not find container \"88b7e734914e794214e2aebef503156d1765f95ae9448c39f627466c3da7ec9f\": container with ID starting with 88b7e734914e794214e2aebef503156d1765f95ae9448c39f627466c3da7ec9f not found: ID does not exist" Dec 10 14:56:22 crc kubenswrapper[4727]: I1210 14:56:22.910859 4727 scope.go:117] "RemoveContainer" containerID="4cd1f57722bcb8a0de8c1cc229ba372ddffd0839e4da6f8c404dc4b5252e6ed4" Dec 10 14:56:23 crc kubenswrapper[4727]: I1210 14:56:23.003457 4727 scope.go:117] "RemoveContainer" containerID="4cd1f57722bcb8a0de8c1cc229ba372ddffd0839e4da6f8c404dc4b5252e6ed4" Dec 10 14:56:23 crc kubenswrapper[4727]: E1210 14:56:23.009212 
4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cd1f57722bcb8a0de8c1cc229ba372ddffd0839e4da6f8c404dc4b5252e6ed4\": container with ID starting with 4cd1f57722bcb8a0de8c1cc229ba372ddffd0839e4da6f8c404dc4b5252e6ed4 not found: ID does not exist" containerID="4cd1f57722bcb8a0de8c1cc229ba372ddffd0839e4da6f8c404dc4b5252e6ed4" Dec 10 14:56:23 crc kubenswrapper[4727]: I1210 14:56:23.009262 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd1f57722bcb8a0de8c1cc229ba372ddffd0839e4da6f8c404dc4b5252e6ed4"} err="failed to get container status \"4cd1f57722bcb8a0de8c1cc229ba372ddffd0839e4da6f8c404dc4b5252e6ed4\": rpc error: code = NotFound desc = could not find container \"4cd1f57722bcb8a0de8c1cc229ba372ddffd0839e4da6f8c404dc4b5252e6ed4\": container with ID starting with 4cd1f57722bcb8a0de8c1cc229ba372ddffd0839e4da6f8c404dc4b5252e6ed4 not found: ID does not exist" Dec 10 14:56:23 crc kubenswrapper[4727]: I1210 14:56:23.107364 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:56:23 crc kubenswrapper[4727]: I1210 14:56:23.198241 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-ovsdbserver-nb\") pod \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\" (UID: \"64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9\") " Dec 10 14:56:23 crc kubenswrapper[4727]: I1210 14:56:23.200103 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9" (UID: "64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:23 crc kubenswrapper[4727]: I1210 14:56:23.300340 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:23 crc kubenswrapper[4727]: I1210 14:56:23.347000 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:56:23 crc kubenswrapper[4727]: I1210 14:56:23.409671 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-flmw8"] Dec 10 14:56:23 crc kubenswrapper[4727]: I1210 14:56:23.522009 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-g5l52"] Dec 10 14:56:23 crc kubenswrapper[4727]: I1210 14:56:23.542244 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-g5l52"] Dec 10 14:56:23 crc kubenswrapper[4727]: I1210 14:56:23.881197 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" event={"ID":"e70cfc42-b547-4996-81d0-8be6145dc587","Type":"ContainerStarted","Data":"ba9cb98be55ac28e6a5883d5930f75e1a2415c4669ba49a2f6c5a9434d8b9f56"} Dec 10 14:56:23 crc kubenswrapper[4727]: I1210 14:56:23.881386 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:23 crc kubenswrapper[4727]: I1210 14:56:23.919817 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" podStartSLOduration=4.919789909 podStartE2EDuration="4.919789909s" podCreationTimestamp="2025-12-10 14:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:23.911088949 +0000 UTC m=+1488.105863491" watchObservedRunningTime="2025-12-10 14:56:23.919789909 +0000 UTC m=+1488.114564461" Dec 10 14:56:24 crc kubenswrapper[4727]: I1210 14:56:24.580265 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21b24a6e-1a57-447a-9452-e9644eb543b3" path="/var/lib/kubelet/pods/21b24a6e-1a57-447a-9452-e9644eb543b3/volumes" Dec 10 14:56:24 crc kubenswrapper[4727]: I1210 14:56:24.581056 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9" path="/var/lib/kubelet/pods/64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9/volumes" Dec 10 14:56:24 crc kubenswrapper[4727]: I1210 14:56:24.894213 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-flmw8" podUID="e8436452-f31c-4b3b-9c04-2fddf8668b1e" containerName="registry-server" containerID="cri-o://c7976b327b1eafbdb39babe9616c355feedcb39f40775f6622b3b81f69cc4f0b" gracePeriod=2 Dec 10 14:56:25 crc kubenswrapper[4727]: I1210 14:56:25.934762 4727 generic.go:334] "Generic (PLEG): container finished" podID="e8436452-f31c-4b3b-9c04-2fddf8668b1e" containerID="c7976b327b1eafbdb39babe9616c355feedcb39f40775f6622b3b81f69cc4f0b" exitCode=0 Dec 10 14:56:25 crc kubenswrapper[4727]: I1210 14:56:25.935122 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flmw8" event={"ID":"e8436452-f31c-4b3b-9c04-2fddf8668b1e","Type":"ContainerDied","Data":"c7976b327b1eafbdb39babe9616c355feedcb39f40775f6622b3b81f69cc4f0b"} Dec 10 14:56:26 crc kubenswrapper[4727]: 
I1210 14:56:26.306701 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:56:26 crc kubenswrapper[4727]: I1210 14:56:26.338956 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8436452-f31c-4b3b-9c04-2fddf8668b1e-utilities\") pod \"e8436452-f31c-4b3b-9c04-2fddf8668b1e\" (UID: \"e8436452-f31c-4b3b-9c04-2fddf8668b1e\") " Dec 10 14:56:26 crc kubenswrapper[4727]: I1210 14:56:26.339238 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpmkw\" (UniqueName: \"kubernetes.io/projected/e8436452-f31c-4b3b-9c04-2fddf8668b1e-kube-api-access-gpmkw\") pod \"e8436452-f31c-4b3b-9c04-2fddf8668b1e\" (UID: \"e8436452-f31c-4b3b-9c04-2fddf8668b1e\") " Dec 10 14:56:26 crc kubenswrapper[4727]: I1210 14:56:26.339385 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8436452-f31c-4b3b-9c04-2fddf8668b1e-catalog-content\") pod \"e8436452-f31c-4b3b-9c04-2fddf8668b1e\" (UID: \"e8436452-f31c-4b3b-9c04-2fddf8668b1e\") " Dec 10 14:56:26 crc kubenswrapper[4727]: I1210 14:56:26.339874 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8436452-f31c-4b3b-9c04-2fddf8668b1e-utilities" (OuterVolumeSpecName: "utilities") pod "e8436452-f31c-4b3b-9c04-2fddf8668b1e" (UID: "e8436452-f31c-4b3b-9c04-2fddf8668b1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:56:26 crc kubenswrapper[4727]: I1210 14:56:26.361550 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8436452-f31c-4b3b-9c04-2fddf8668b1e-kube-api-access-gpmkw" (OuterVolumeSpecName: "kube-api-access-gpmkw") pod "e8436452-f31c-4b3b-9c04-2fddf8668b1e" (UID: "e8436452-f31c-4b3b-9c04-2fddf8668b1e"). InnerVolumeSpecName "kube-api-access-gpmkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:26 crc kubenswrapper[4727]: I1210 14:56:26.441160 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8436452-f31c-4b3b-9c04-2fddf8668b1e-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:26 crc kubenswrapper[4727]: I1210 14:56:26.441453 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpmkw\" (UniqueName: \"kubernetes.io/projected/e8436452-f31c-4b3b-9c04-2fddf8668b1e-kube-api-access-gpmkw\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:26 crc kubenswrapper[4727]: I1210 14:56:26.479640 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8436452-f31c-4b3b-9c04-2fddf8668b1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8436452-f31c-4b3b-9c04-2fddf8668b1e" (UID: "e8436452-f31c-4b3b-9c04-2fddf8668b1e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:56:26 crc kubenswrapper[4727]: I1210 14:56:26.543823 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8436452-f31c-4b3b-9c04-2fddf8668b1e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:26 crc kubenswrapper[4727]: I1210 14:56:26.962348 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flmw8" event={"ID":"e8436452-f31c-4b3b-9c04-2fddf8668b1e","Type":"ContainerDied","Data":"fd26376fbccc283124f74e13a63d0231b1717878cd55424075873a14fd7dfaec"} Dec 10 14:56:26 crc kubenswrapper[4727]: I1210 14:56:26.962445 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flmw8" Dec 10 14:56:26 crc kubenswrapper[4727]: I1210 14:56:26.962465 4727 scope.go:117] "RemoveContainer" containerID="c7976b327b1eafbdb39babe9616c355feedcb39f40775f6622b3b81f69cc4f0b" Dec 10 14:56:27 crc kubenswrapper[4727]: I1210 14:56:27.013116 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-flmw8"] Dec 10 14:56:27 crc kubenswrapper[4727]: I1210 14:56:27.024016 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-flmw8"] Dec 10 14:56:28 crc kubenswrapper[4727]: I1210 14:56:28.577283 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8436452-f31c-4b3b-9c04-2fddf8668b1e" path="/var/lib/kubelet/pods/e8436452-f31c-4b3b-9c04-2fddf8668b1e/volumes" Dec 10 14:56:29 crc kubenswrapper[4727]: I1210 14:56:29.002295 4727 generic.go:334] "Generic (PLEG): container finished" podID="5d1bf00b-226f-4103-b12d-551e0974c8da" containerID="9a0cfea999642f666c7f3840c0d1ad1186ef70157a8110ed8ee9d0a6a56acdd7" exitCode=0 Dec 10 14:56:29 crc kubenswrapper[4727]: I1210 14:56:29.002750 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2479h" event={"ID":"5d1bf00b-226f-4103-b12d-551e0974c8da","Type":"ContainerDied","Data":"9a0cfea999642f666c7f3840c0d1ad1186ef70157a8110ed8ee9d0a6a56acdd7"} Dec 10 14:56:29 crc kubenswrapper[4727]: I1210 14:56:29.009088 4727 generic.go:334] "Generic (PLEG): container finished" podID="d61f7e3d-578d-4429-ad9c-31ba3e8be091" containerID="6fe5490bfa7a5d52aff03aeca876aeef7be445f1a29e88d77ce3fdc765913148" exitCode=0 Dec 10 14:56:29 crc kubenswrapper[4727]: I1210 14:56:29.009137 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d61f7e3d-578d-4429-ad9c-31ba3e8be091","Type":"ContainerDied","Data":"6fe5490bfa7a5d52aff03aeca876aeef7be445f1a29e88d77ce3fdc765913148"} Dec 10 14:56:30 crc kubenswrapper[4727]: I1210 14:56:30.525153 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:56:30 crc kubenswrapper[4727]: I1210 14:56:30.620217 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cvb6b"] Dec 10 14:56:30 crc kubenswrapper[4727]: I1210 14:56:30.620498 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-cvb6b" podUID="3ba9cb5c-65f9-4733-a32c-018aa65c9a40" containerName="dnsmasq-dns" containerID="cri-o://a265938aa62626947a4ea3b94cf185a3c9b53622cc6ae04cc492674f48e32caf" gracePeriod=10 Dec 10 14:56:32 crc kubenswrapper[4727]: I1210 14:56:32.053279 4727 generic.go:334] "Generic (PLEG): 
container finished" podID="3ba9cb5c-65f9-4733-a32c-018aa65c9a40" containerID="a265938aa62626947a4ea3b94cf185a3c9b53622cc6ae04cc492674f48e32caf" exitCode=0 Dec 10 14:56:32 crc kubenswrapper[4727]: I1210 14:56:32.053348 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cvb6b" event={"ID":"3ba9cb5c-65f9-4733-a32c-018aa65c9a40","Type":"ContainerDied","Data":"a265938aa62626947a4ea3b94cf185a3c9b53622cc6ae04cc492674f48e32caf"} Dec 10 14:56:34 crc kubenswrapper[4727]: I1210 14:56:34.393684 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-cvb6b" podUID="3ba9cb5c-65f9-4733-a32c-018aa65c9a40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused" Dec 10 14:56:34 crc kubenswrapper[4727]: I1210 14:56:34.752726 4727 scope.go:117] "RemoveContainer" containerID="1fc02b4b791ff371a7d0d6d30185eaad8421a39fae49a8ca72d275cf2315a9de" Dec 10 14:56:36 crc kubenswrapper[4727]: E1210 14:56:36.356244 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Dec 10 14:56:36 crc kubenswrapper[4727]: E1210 14:56:36.356620 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr5mn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-vrmr7_openstack(fe1d20cc-47dd-4803-a7dd-36e43d2f2d43): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:56:36 crc kubenswrapper[4727]: E1210 14:56:36.357829 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-vrmr7" podUID="fe1d20cc-47dd-4803-a7dd-36e43d2f2d43" Dec 10 14:56:37 crc kubenswrapper[4727]: E1210 14:56:37.117232 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-vrmr7" podUID="fe1d20cc-47dd-4803-a7dd-36e43d2f2d43" Dec 10 14:56:37 crc kubenswrapper[4727]: I1210 14:56:37.733995 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:56:37 crc kubenswrapper[4727]: I1210 14:56:37.734133 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:56:37 crc kubenswrapper[4727]: I1210 14:56:37.734228 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 14:56:37 crc kubenswrapper[4727]: I1210 14:56:37.735808 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b2d38dbef40e687b846e527a023d87ca5607929b3a7329d3334ac26ab387fb1"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 14:56:37 crc kubenswrapper[4727]: I1210 14:56:37.735932 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://9b2d38dbef40e687b846e527a023d87ca5607929b3a7329d3334ac26ab387fb1" gracePeriod=600 Dec 10 14:56:38 crc kubenswrapper[4727]: I1210 14:56:38.128847 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="9b2d38dbef40e687b846e527a023d87ca5607929b3a7329d3334ac26ab387fb1" exitCode=0 Dec 10 14:56:38 crc kubenswrapper[4727]: I1210 14:56:38.128975 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"9b2d38dbef40e687b846e527a023d87ca5607929b3a7329d3334ac26ab387fb1"} Dec 10 14:56:39 crc kubenswrapper[4727]: I1210 14:56:39.393781 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-cvb6b" podUID="3ba9cb5c-65f9-4733-a32c-018aa65c9a40" containerName="dnsmasq-dns" 
probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused" Dec 10 14:56:44 crc kubenswrapper[4727]: I1210 14:56:44.394747 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-cvb6b" podUID="3ba9cb5c-65f9-4733-a32c-018aa65c9a40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused" Dec 10 14:56:44 crc kubenswrapper[4727]: I1210 14:56:44.395438 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:56:49 crc kubenswrapper[4727]: I1210 14:56:49.394634 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-cvb6b" podUID="3ba9cb5c-65f9-4733-a32c-018aa65c9a40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused" Dec 10 14:56:54 crc kubenswrapper[4727]: I1210 14:56:54.394374 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-cvb6b" podUID="3ba9cb5c-65f9-4733-a32c-018aa65c9a40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused" Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.341288 4727 generic.go:334] "Generic (PLEG): container finished" podID="20c3a0fe-e0f7-4f79-ae22-d143511424e9" containerID="d92b4d40fd1384c1b441e23abb935ba80a45e0204cb1ef208abc372e63d83fc4" exitCode=0 Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.341363 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zb5mz" event={"ID":"20c3a0fe-e0f7-4f79-ae22-d143511424e9","Type":"ContainerDied","Data":"d92b4d40fd1384c1b441e23abb935ba80a45e0204cb1ef208abc372e63d83fc4"} Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.520542 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.794460 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-fernet-keys\") pod \"5d1bf00b-226f-4103-b12d-551e0974c8da\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.794538 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-scripts\") pod \"5d1bf00b-226f-4103-b12d-551e0974c8da\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.794634 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-combined-ca-bundle\") pod \"5d1bf00b-226f-4103-b12d-551e0974c8da\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.794674 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9m5w\" (UniqueName: \"kubernetes.io/projected/5d1bf00b-226f-4103-b12d-551e0974c8da-kube-api-access-z9m5w\") pod \"5d1bf00b-226f-4103-b12d-551e0974c8da\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.794734 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-credential-keys\") pod \"5d1bf00b-226f-4103-b12d-551e0974c8da\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.794818 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-config-data\") pod \"5d1bf00b-226f-4103-b12d-551e0974c8da\" (UID: \"5d1bf00b-226f-4103-b12d-551e0974c8da\") " Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.801798 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5d1bf00b-226f-4103-b12d-551e0974c8da" (UID: "5d1bf00b-226f-4103-b12d-551e0974c8da"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.803271 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5d1bf00b-226f-4103-b12d-551e0974c8da" (UID: "5d1bf00b-226f-4103-b12d-551e0974c8da"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.803624 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-scripts" (OuterVolumeSpecName: "scripts") pod "5d1bf00b-226f-4103-b12d-551e0974c8da" (UID: "5d1bf00b-226f-4103-b12d-551e0974c8da"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.816991 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d1bf00b-226f-4103-b12d-551e0974c8da-kube-api-access-z9m5w" (OuterVolumeSpecName: "kube-api-access-z9m5w") pod "5d1bf00b-226f-4103-b12d-551e0974c8da" (UID: "5d1bf00b-226f-4103-b12d-551e0974c8da"). InnerVolumeSpecName "kube-api-access-z9m5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.840225 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d1bf00b-226f-4103-b12d-551e0974c8da" (UID: "5d1bf00b-226f-4103-b12d-551e0974c8da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.849530 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-config-data" (OuterVolumeSpecName: "config-data") pod "5d1bf00b-226f-4103-b12d-551e0974c8da" (UID: "5d1bf00b-226f-4103-b12d-551e0974c8da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.897683 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.897725 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9m5w\" (UniqueName: \"kubernetes.io/projected/5d1bf00b-226f-4103-b12d-551e0974c8da-kube-api-access-z9m5w\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.897740 4727 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.897751 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.897763 4727 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:57 crc kubenswrapper[4727]: I1210 14:56:57.897773 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d1bf00b-226f-4103-b12d-551e0974c8da-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:58 crc kubenswrapper[4727]: E1210 14:56:58.073834 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 10 14:56:58 crc kubenswrapper[4727]: E1210 14:56:58.074400 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n54bh5d9h5fch5bhd4h675h658h689hbbh554h647h55bh5bch554h65dh8h594hd9h9ch4h5d8h69h94h7h65bh8ch7bh585h596h67h87h67q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fbht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(353e8adf-9611-450e-ac07-270a2550ee0e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.352897 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2479h" event={"ID":"5d1bf00b-226f-4103-b12d-551e0974c8da","Type":"ContainerDied","Data":"c880c6d45bafd1dd52d96adae92a2c2280301834f96d19101bcb5ee802ba83d7"} Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.352971 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c880c6d45bafd1dd52d96adae92a2c2280301834f96d19101bcb5ee802ba83d7" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.352927 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2479h" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.636831 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2479h"] Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.648092 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2479h"] Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.733663 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qphqt"] Dec 10 14:56:58 crc kubenswrapper[4727]: E1210 14:56:58.734240 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b24a6e-1a57-447a-9452-e9644eb543b3" containerName="init" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.734263 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b24a6e-1a57-447a-9452-e9644eb543b3" containerName="init" Dec 10 14:56:58 crc kubenswrapper[4727]: E1210 14:56:58.734280 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8436452-f31c-4b3b-9c04-2fddf8668b1e" containerName="extract-utilities" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.734289 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8436452-f31c-4b3b-9c04-2fddf8668b1e" containerName="extract-utilities" Dec 10 14:56:58 crc kubenswrapper[4727]: E1210 14:56:58.734313 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8436452-f31c-4b3b-9c04-2fddf8668b1e" containerName="extract-content" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.734319 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8436452-f31c-4b3b-9c04-2fddf8668b1e" containerName="extract-content" Dec 10 14:56:58 crc kubenswrapper[4727]: E1210 14:56:58.734331 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8436452-f31c-4b3b-9c04-2fddf8668b1e" containerName="registry-server" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.734337 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8436452-f31c-4b3b-9c04-2fddf8668b1e" containerName="registry-server" Dec 10 14:56:58 crc kubenswrapper[4727]: E1210 14:56:58.734349 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1bf00b-226f-4103-b12d-551e0974c8da" containerName="keystone-bootstrap" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.734357 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1bf00b-226f-4103-b12d-551e0974c8da" containerName="keystone-bootstrap" Dec 10 14:56:58 crc kubenswrapper[4727]: E1210 14:56:58.734376 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9" containerName="init" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.734382 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9" containerName="init" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.734578 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ba26d9-ff0f-4f17-9eb0-a8afb2756bd9" containerName="init" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.734591 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1bf00b-226f-4103-b12d-551e0974c8da" containerName="keystone-bootstrap" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.734603 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8436452-f31c-4b3b-9c04-2fddf8668b1e" containerName="registry-server" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.734613 
4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b24a6e-1a57-447a-9452-e9644eb543b3" containerName="init" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.735772 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.739188 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.740122 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.740335 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pw9zq" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.740562 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.751419 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qphqt"] Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.920205 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-config-data\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.920282 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-fernet-keys\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.920317 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-combined-ca-bundle\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.920365 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-scripts\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.920729 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpz7z\" (UniqueName: \"kubernetes.io/projected/63d40623-45b4-447b-a237-36d83666ad4d-kube-api-access-mpz7z\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:58 crc kubenswrapper[4727]: I1210 14:56:58.921365 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-credential-keys\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:59 crc kubenswrapper[4727]: 
I1210 14:56:59.023815 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-config-data\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:59 crc kubenswrapper[4727]: I1210 14:56:59.023867 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-fernet-keys\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:59 crc kubenswrapper[4727]: I1210 14:56:59.023896 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-combined-ca-bundle\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:59 crc kubenswrapper[4727]: I1210 14:56:59.023946 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-scripts\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:59 crc kubenswrapper[4727]: I1210 14:56:59.024005 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpz7z\" (UniqueName: \"kubernetes.io/projected/63d40623-45b4-447b-a237-36d83666ad4d-kube-api-access-mpz7z\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:59 crc kubenswrapper[4727]: I1210 14:56:59.024071 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-credential-keys\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:59 crc kubenswrapper[4727]: I1210 14:56:59.028867 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-credential-keys\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:59 crc kubenswrapper[4727]: I1210 14:56:59.029158 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-scripts\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:59 crc kubenswrapper[4727]: I1210 14:56:59.029170 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-fernet-keys\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:59 crc kubenswrapper[4727]: I1210 14:56:59.035603 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-config-data\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:59 crc kubenswrapper[4727]: I1210 14:56:59.035611 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-combined-ca-bundle\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:59 crc kubenswrapper[4727]: I1210 14:56:59.053488 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpz7z\" (UniqueName: \"kubernetes.io/projected/63d40623-45b4-447b-a237-36d83666ad4d-kube-api-access-mpz7z\") pod \"keystone-bootstrap-qphqt\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:56:59 crc kubenswrapper[4727]: I1210 14:56:59.072067 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:57:00 crc kubenswrapper[4727]: I1210 14:57:00.579536 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d1bf00b-226f-4103-b12d-551e0974c8da" path="/var/lib/kubelet/pods/5d1bf00b-226f-4103-b12d-551e0974c8da/volumes" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.039658 4727 scope.go:117] "RemoveContainer" containerID="c254cefa609d885f048f3d7acd800e20e378c1ed013856aa15f69e8a417c10f2" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.157233 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.167105 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zb5mz" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.208584 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-dns-svc\") pod \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.208730 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-ovsdbserver-nb\") pod \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.208868 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-config-data\") pod \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.208924 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-combined-ca-bundle\") pod \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.209000 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47ztr\" (UniqueName: \"kubernetes.io/projected/20c3a0fe-e0f7-4f79-ae22-d143511424e9-kube-api-access-47ztr\") pod \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.209078 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcc9v\" (UniqueName: \"kubernetes.io/projected/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-kube-api-access-jcc9v\") pod \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.209104 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-ovsdbserver-sb\") pod \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.209174 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-config\") pod \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\" (UID: \"3ba9cb5c-65f9-4733-a32c-018aa65c9a40\") " Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.209210 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-db-sync-config-data\") pod \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\" (UID: \"20c3a0fe-e0f7-4f79-ae22-d143511424e9\") " Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.216052 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod 
"20c3a0fe-e0f7-4f79-ae22-d143511424e9" (UID: "20c3a0fe-e0f7-4f79-ae22-d143511424e9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.218130 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-kube-api-access-jcc9v" (OuterVolumeSpecName: "kube-api-access-jcc9v") pod "3ba9cb5c-65f9-4733-a32c-018aa65c9a40" (UID: "3ba9cb5c-65f9-4733-a32c-018aa65c9a40"). InnerVolumeSpecName "kube-api-access-jcc9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.231299 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c3a0fe-e0f7-4f79-ae22-d143511424e9-kube-api-access-47ztr" (OuterVolumeSpecName: "kube-api-access-47ztr") pod "20c3a0fe-e0f7-4f79-ae22-d143511424e9" (UID: "20c3a0fe-e0f7-4f79-ae22-d143511424e9"). InnerVolumeSpecName "kube-api-access-47ztr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.277055 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20c3a0fe-e0f7-4f79-ae22-d143511424e9" (UID: "20c3a0fe-e0f7-4f79-ae22-d143511424e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.289521 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-config-data" (OuterVolumeSpecName: "config-data") pod "20c3a0fe-e0f7-4f79-ae22-d143511424e9" (UID: "20c3a0fe-e0f7-4f79-ae22-d143511424e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.297416 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ba9cb5c-65f9-4733-a32c-018aa65c9a40" (UID: "3ba9cb5c-65f9-4733-a32c-018aa65c9a40"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.300300 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ba9cb5c-65f9-4733-a32c-018aa65c9a40" (UID: "3ba9cb5c-65f9-4733-a32c-018aa65c9a40"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.306008 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-config" (OuterVolumeSpecName: "config") pod "3ba9cb5c-65f9-4733-a32c-018aa65c9a40" (UID: "3ba9cb5c-65f9-4733-a32c-018aa65c9a40"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.310593 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.310643 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.310661 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47ztr\" (UniqueName: \"kubernetes.io/projected/20c3a0fe-e0f7-4f79-ae22-d143511424e9-kube-api-access-47ztr\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.310678 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcc9v\" (UniqueName: \"kubernetes.io/projected/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-kube-api-access-jcc9v\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.310688 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.310700 4727 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/20c3a0fe-e0f7-4f79-ae22-d143511424e9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.310711 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.310731 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.325081 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ba9cb5c-65f9-4733-a32c-018aa65c9a40" (UID: "3ba9cb5c-65f9-4733-a32c-018aa65c9a40"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:01 crc kubenswrapper[4727]: E1210 14:57:01.341706 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 10 14:57:01 crc kubenswrapper[4727]: E1210 14:57:01.341895 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrxxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-8s8pt_openstack(6a7dc74a-65ab-4440-b4d0-33c102b7baeb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:57:01 crc kubenswrapper[4727]: E1210 14:57:01.343144 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-8s8pt" podUID="6a7dc74a-65ab-4440-b4d0-33c102b7baeb" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.382820 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cvb6b" 
event={"ID":"3ba9cb5c-65f9-4733-a32c-018aa65c9a40","Type":"ContainerDied","Data":"898aa4ef284ed39908f384d484761df0a37b19c49316728e6b53f3cb5ca097ed"} Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.382875 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cvb6b" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.387365 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zb5mz" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.387375 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zb5mz" event={"ID":"20c3a0fe-e0f7-4f79-ae22-d143511424e9","Type":"ContainerDied","Data":"4a443908af31711f482cfdcb573093bdbe455bbc26420e7b222f1ee2cf498cac"} Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.387618 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a443908af31711f482cfdcb573093bdbe455bbc26420e7b222f1ee2cf498cac" Dec 10 14:57:01 crc kubenswrapper[4727]: E1210 14:57:01.392283 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-8s8pt" podUID="6a7dc74a-65ab-4440-b4d0-33c102b7baeb" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.450270 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ba9cb5c-65f9-4733-a32c-018aa65c9a40-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.496890 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cvb6b"] Dec 10 14:57:01 crc kubenswrapper[4727]: I1210 14:57:01.506377 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cvb6b"] Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.577519 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ba9cb5c-65f9-4733-a32c-018aa65c9a40" path="/var/lib/kubelet/pods/3ba9cb5c-65f9-4733-a32c-018aa65c9a40/volumes" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.662478 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s8th4"] Dec 10 14:57:02 crc kubenswrapper[4727]: E1210 14:57:02.663670 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba9cb5c-65f9-4733-a32c-018aa65c9a40" containerName="dnsmasq-dns" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.663705 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba9cb5c-65f9-4733-a32c-018aa65c9a40" containerName="dnsmasq-dns" Dec 10 14:57:02 crc kubenswrapper[4727]: E1210 14:57:02.663725 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c3a0fe-e0f7-4f79-ae22-d143511424e9" containerName="glance-db-sync" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.663733 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c3a0fe-e0f7-4f79-ae22-d143511424e9" containerName="glance-db-sync" Dec 10 14:57:02 crc kubenswrapper[4727]: E1210 14:57:02.663753 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba9cb5c-65f9-4733-a32c-018aa65c9a40" containerName="init" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.663762 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3ba9cb5c-65f9-4733-a32c-018aa65c9a40" containerName="init" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.664075 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba9cb5c-65f9-4733-a32c-018aa65c9a40" containerName="dnsmasq-dns" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.664109 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="20c3a0fe-e0f7-4f79-ae22-d143511424e9" containerName="glance-db-sync" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.665505 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.680892 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.681316 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.681348 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-config\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.681380 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.681507 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.681544 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pcdm\" (UniqueName: \"kubernetes.io/projected/d0c47fd2-9e71-493d-a5bd-d528703d9942-kube-api-access-6pcdm\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.784224 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.784329 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.784358 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-config\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.784386 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.784494 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.784529 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pcdm\" (UniqueName: \"kubernetes.io/projected/d0c47fd2-9e71-493d-a5bd-d528703d9942-kube-api-access-6pcdm\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.794021 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s8th4"] Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.806964 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.809488 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.813751 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-config\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.814342 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc 
kubenswrapper[4727]: I1210 14:57:02.815794 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:02 crc kubenswrapper[4727]: I1210 14:57:02.834052 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pcdm\" (UniqueName: \"kubernetes.io/projected/d0c47fd2-9e71-493d-a5bd-d528703d9942-kube-api-access-6pcdm\") pod \"dnsmasq-dns-785d8bcb8c-s8th4\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.114584 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.511353 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.513236 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.517525 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zq7zd" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.517542 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.519249 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.548215 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.600254 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0fcc217-4afd-4074-bdc6-28013bf480e7-logs\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.600450 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-scripts\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.600551 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.600586 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.600648 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0fcc217-4afd-4074-bdc6-28013bf480e7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.600686 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-config-data\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.601456 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldsh8\" (UniqueName: \"kubernetes.io/projected/e0fcc217-4afd-4074-bdc6-28013bf480e7-kube-api-access-ldsh8\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.703749 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0fcc217-4afd-4074-bdc6-28013bf480e7-logs\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.703846 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-scripts\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.703930 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.703957 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.703999 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0fcc217-4afd-4074-bdc6-28013bf480e7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.704046 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.704065 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldsh8\" (UniqueName: \"kubernetes.io/projected/e0fcc217-4afd-4074-bdc6-28013bf480e7-kube-api-access-ldsh8\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.705399 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0fcc217-4afd-4074-bdc6-28013bf480e7-logs\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.707098 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.707150 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/db4de05049fa91af9a988d2a2b63e79bbb68f0fa95a3085791268e44688729c7/globalmount\"" pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.707318 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0fcc217-4afd-4074-bdc6-28013bf480e7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.710551 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.714927 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-config-data\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.716776 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-scripts\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.741238 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldsh8\" (UniqueName: \"kubernetes.io/projected/e0fcc217-4afd-4074-bdc6-28013bf480e7-kube-api-access-ldsh8\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " 
pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.761548 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") pod \"glance-default-external-api-0\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.835087 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.962111 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.967103 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.970274 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 10 14:57:03 crc kubenswrapper[4727]: I1210 14:57:03.982932 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.115169 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.115285 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/12505db5-845e-4ebd-a28e-1022d60f55d5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.115335 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.115401 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsxsd\" (UniqueName: \"kubernetes.io/projected/12505db5-845e-4ebd-a28e-1022d60f55d5-kube-api-access-hsxsd\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.115454 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.115487 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12505db5-845e-4ebd-a28e-1022d60f55d5-logs\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.117562 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.219610 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.219694 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/12505db5-845e-4ebd-a28e-1022d60f55d5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.219726 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.219766 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsxsd\" (UniqueName: \"kubernetes.io/projected/12505db5-845e-4ebd-a28e-1022d60f55d5-kube-api-access-hsxsd\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.219798 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.219821 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12505db5-845e-4ebd-a28e-1022d60f55d5-logs\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.219864 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.220847 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/12505db5-845e-4ebd-a28e-1022d60f55d5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.220942 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12505db5-845e-4ebd-a28e-1022d60f55d5-logs\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.221785 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.221812 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a620bc24bbc5ba249cf329bc1ea3ed90fd24230212b99e061c02e82f00eaedd9/globalmount\"" pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.226016 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.226535 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.236054 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.239962 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsxsd\" (UniqueName: \"kubernetes.io/projected/12505db5-845e-4ebd-a28e-1022d60f55d5-kube-api-access-hsxsd\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.263867 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") pod \"glance-default-internal-api-0\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.292740 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 14:57:04 crc kubenswrapper[4727]: I1210 14:57:04.395124 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-cvb6b" podUID="3ba9cb5c-65f9-4733-a32c-018aa65c9a40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: i/o timeout" Dec 10 14:57:05 crc kubenswrapper[4727]: I1210 14:57:05.657386 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:57:05 crc kubenswrapper[4727]: I1210 14:57:05.744003 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:57:08 crc kubenswrapper[4727]: I1210 14:57:08.405093 4727 scope.go:117] "RemoveContainer" containerID="bf0c0cb5db6cbb369cba9f7cbcfb4667db68ee6a05b492c3d6b69303943d84f1" Dec 10 14:57:09 crc kubenswrapper[4727]: I1210 14:57:09.524850 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d61f7e3d-578d-4429-ad9c-31ba3e8be091","Type":"ContainerStarted","Data":"a72d60097b69acff211f2c71490043c57681acd8f1cb50111b0f531aa89e2650"} Dec 10 14:57:10 crc kubenswrapper[4727]: E1210 14:57:10.749644 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Dec 10 14:57:10 crc kubenswrapper[4727]: E1210 14:57:10.750076 4727 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Dec 10 14:57:10 crc kubenswrapper[4727]: E1210 14:57:10.750227 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rg2k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-rnv6s_openstack(8ef9b442-2a15-4657-b111-5af4a72d39e4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:57:10 crc kubenswrapper[4727]: E1210 14:57:10.751438 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-rnv6s" podUID="8ef9b442-2a15-4657-b111-5af4a72d39e4" Dec 10 14:57:11 crc kubenswrapper[4727]: I1210 14:57:11.225948 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qphqt"] Dec 10 14:57:11 crc kubenswrapper[4727]: E1210 14:57:11.549019 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-rnv6s" podUID="8ef9b442-2a15-4657-b111-5af4a72d39e4" Dec 10 14:57:11 crc kubenswrapper[4727]: W1210 14:57:11.697350 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63d40623_45b4_447b_a237_36d83666ad4d.slice/crio-59f403b8d2800f32e8b8570e86c0230d256ff29b430b2eb6ce5968ee7c1c529a WatchSource:0}: Error finding container 59f403b8d2800f32e8b8570e86c0230d256ff29b430b2eb6ce5968ee7c1c529a: Status 404 returned error can't find the container with id 59f403b8d2800f32e8b8570e86c0230d256ff29b430b2eb6ce5968ee7c1c529a Dec 10 14:57:11 crc kubenswrapper[4727]: I1210 14:57:11.753152 4727 scope.go:117] "RemoveContainer" containerID="a265938aa62626947a4ea3b94cf185a3c9b53622cc6ae04cc492674f48e32caf" Dec 10 14:57:11 crc kubenswrapper[4727]: I1210 14:57:11.978609 4727 scope.go:117] "RemoveContainer" containerID="fcf847f6a0f1a480d25de92d21a632d373bc462220f8f44546664a31fca49631" Dec 10 14:57:12 crc kubenswrapper[4727]: I1210 14:57:12.132385 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s8th4"] Dec 10 14:57:12 crc kubenswrapper[4727]: I1210 14:57:12.322785 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:57:12 crc kubenswrapper[4727]: W1210 14:57:12.328515 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0c47fd2_9e71_493d_a5bd_d528703d9942.slice/crio-4dd245271025db29bcfb22dc5939e83c5718afad57f84a263100dd60504737f6 WatchSource:0}: Error finding container 4dd245271025db29bcfb22dc5939e83c5718afad57f84a263100dd60504737f6: Status 404 returned error can't find the container with id 4dd245271025db29bcfb22dc5939e83c5718afad57f84a263100dd60504737f6 Dec 10 14:57:12 crc kubenswrapper[4727]: W1210 14:57:12.332882 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12505db5_845e_4ebd_a28e_1022d60f55d5.slice/crio-0659b6a7f65b629c40d175301172a44994510d5aff50eabcee0b5d93167b30de WatchSource:0}: Error finding container 0659b6a7f65b629c40d175301172a44994510d5aff50eabcee0b5d93167b30de: Status 404 returned error can't find the container with id 0659b6a7f65b629c40d175301172a44994510d5aff50eabcee0b5d93167b30de Dec 10 14:57:12 crc kubenswrapper[4727]: I1210 14:57:12.584108 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" event={"ID":"d0c47fd2-9e71-493d-a5bd-d528703d9942","Type":"ContainerStarted","Data":"4dd245271025db29bcfb22dc5939e83c5718afad57f84a263100dd60504737f6"} Dec 10 14:57:12 crc kubenswrapper[4727]: I1210 14:57:12.584184 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"12505db5-845e-4ebd-a28e-1022d60f55d5","Type":"ContainerStarted","Data":"0659b6a7f65b629c40d175301172a44994510d5aff50eabcee0b5d93167b30de"} Dec 10 14:57:12 crc kubenswrapper[4727]: I1210 14:57:12.584197 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qphqt" event={"ID":"63d40623-45b4-447b-a237-36d83666ad4d","Type":"ContainerStarted","Data":"59f403b8d2800f32e8b8570e86c0230d256ff29b430b2eb6ce5968ee7c1c529a"} Dec 10 14:57:13 crc kubenswrapper[4727]: I1210 14:57:13.273148 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:57:13 crc kubenswrapper[4727]: W1210 14:57:13.280664 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0fcc217_4afd_4074_bdc6_28013bf480e7.slice/crio-a957a8e1deb35e8d460dda9bc44e0105a5ecc7b49c446c098b608d1ebab07043 WatchSource:0}: Error finding container a957a8e1deb35e8d460dda9bc44e0105a5ecc7b49c446c098b608d1ebab07043: Status 404 returned error can't find the container with id a957a8e1deb35e8d460dda9bc44e0105a5ecc7b49c446c098b608d1ebab07043 Dec 10 14:57:13 crc kubenswrapper[4727]: I1210 14:57:13.588683 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0fcc217-4afd-4074-bdc6-28013bf480e7","Type":"ContainerStarted","Data":"a957a8e1deb35e8d460dda9bc44e0105a5ecc7b49c446c098b608d1ebab07043"} Dec 10 14:57:14 crc kubenswrapper[4727]: I1210 14:57:14.629051 4727 generic.go:334] "Generic (PLEG): container finished" podID="d0c47fd2-9e71-493d-a5bd-d528703d9942" containerID="355ad17dc465b57c7886da3df982e772cfaa07bb80be416de606b771235b93a9" exitCode=0 Dec 10 14:57:14 crc kubenswrapper[4727]: I1210 14:57:14.629160 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" event={"ID":"d0c47fd2-9e71-493d-a5bd-d528703d9942","Type":"ContainerDied","Data":"355ad17dc465b57c7886da3df982e772cfaa07bb80be416de606b771235b93a9"} Dec 10 14:57:14 crc kubenswrapper[4727]: I1210 14:57:14.633426 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212"} Dec 10 14:57:14 crc kubenswrapper[4727]: I1210 14:57:14.636831 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0fcc217-4afd-4074-bdc6-28013bf480e7","Type":"ContainerStarted","Data":"41334820517a83c7fee537d5ca2a15301a3a6f9410038b1bad95f5d025d90528"} Dec 10 14:57:14 crc kubenswrapper[4727]: I1210 14:57:14.639463 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"12505db5-845e-4ebd-a28e-1022d60f55d5","Type":"ContainerStarted","Data":"dbd2dc62bce790f5ffdd6ef43db2629dc08abbd4aec43fb878b1317ceda929d1"} Dec 10 14:57:14 crc kubenswrapper[4727]: I1210 14:57:14.641299 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qphqt" event={"ID":"63d40623-45b4-447b-a237-36d83666ad4d","Type":"ContainerStarted","Data":"377e15f73a8ebc68b55d8e90d2bd4b962536ef4e72aa4937dfe8cb9909b68bcd"} Dec 10 14:57:14 crc kubenswrapper[4727]: I1210 14:57:14.643507 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vrmr7" event={"ID":"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43","Type":"ContainerStarted","Data":"d6cdde58b5106b6ac3a3cc4f817fb81b41401c8bf479545683f4e3dee0f2e5e8"} Dec 10 14:57:14 crc kubenswrapper[4727]: I1210 14:57:14.645178 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8fgtj" event={"ID":"eb7744ce-28ea-4e2f-a20e-a925b562e221","Type":"ContainerStarted","Data":"2d0d9ae6c91f1f9abf72e1dbe18e52380c19f15ca37e014d42f4fce841265487"} Dec 10 14:57:14 crc kubenswrapper[4727]: I1210 14:57:14.673886 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qphqt" podStartSLOduration=16.673795608 podStartE2EDuration="16.673795608s" podCreationTimestamp="2025-12-10 14:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:14.671773427 +0000 UTC m=+1538.866547979" watchObservedRunningTime="2025-12-10 14:57:14.673795608 +0000 UTC m=+1538.868570150" Dec 10 14:57:14 crc kubenswrapper[4727]: I1210 14:57:14.690214 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-8fgtj" podStartSLOduration=15.816592112 podStartE2EDuration="56.690195032s" podCreationTimestamp="2025-12-10 14:56:18 +0000 UTC" firstStartedPulling="2025-12-10 14:56:20.184083469 +0000 UTC m=+1484.378858011" lastFinishedPulling="2025-12-10 14:57:01.057686389 +0000 UTC m=+1525.252460931" observedRunningTime="2025-12-10 14:57:14.689064284 +0000 UTC m=+1538.883838826" watchObservedRunningTime="2025-12-10 14:57:14.690195032 +0000 UTC m=+1538.884969574" Dec 10 14:57:14 crc kubenswrapper[4727]: I1210 14:57:14.710903 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-vrmr7" podStartSLOduration=7.474639341 podStartE2EDuration="57.710876265s" podCreationTimestamp="2025-12-10 14:56:17 +0000 UTC" firstStartedPulling="2025-12-10 14:56:20.411352169 +0000 UTC m=+1484.606126711" lastFinishedPulling="2025-12-10 14:57:10.647589093 +0000 UTC m=+1534.842363635" observedRunningTime="2025-12-10 14:57:14.702799441 +0000 UTC m=+1538.897573983" watchObservedRunningTime="2025-12-10 14:57:14.710876265 +0000 UTC m=+1538.905650807" Dec 10 14:57:21 crc kubenswrapper[4727]: I1210 14:57:21.720118 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d61f7e3d-578d-4429-ad9c-31ba3e8be091","Type":"ContainerStarted","Data":"f2c6cf10ead1b5b265796b4888f35532265227f64af4437a47621b1bd214b11b"} Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.761364 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" event={"ID":"d0c47fd2-9e71-493d-a5bd-d528703d9942","Type":"ContainerStarted","Data":"f5a5d28917582a724ef719c36ecd0b07cd0dc9ffc9b0f6e54bf0f1f2aaff3c8e"} Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.761888 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.764690 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0fcc217-4afd-4074-bdc6-28013bf480e7","Type":"ContainerStarted","Data":"0dbf13d8075a6f38b4305565f8d96d3513116d92dd50d75dff3649120df0702d"} Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.764928 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e0fcc217-4afd-4074-bdc6-28013bf480e7" containerName="glance-log" containerID="cri-o://41334820517a83c7fee537d5ca2a15301a3a6f9410038b1bad95f5d025d90528" gracePeriod=30 Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.764969 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e0fcc217-4afd-4074-bdc6-28013bf480e7" containerName="glance-httpd" containerID="cri-o://0dbf13d8075a6f38b4305565f8d96d3513116d92dd50d75dff3649120df0702d" gracePeriod=30 Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.768241 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"353e8adf-9611-450e-ac07-270a2550ee0e","Type":"ContainerStarted","Data":"82de7cebbfb99eee77179edc0d130a9a3d31f3a941f3bac457a87d1ae0f6795d"} Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.779291 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"12505db5-845e-4ebd-a28e-1022d60f55d5","Type":"ContainerStarted","Data":"5fa76fbc7195f54343221081dd71c94a098c33f1f196ec0bcaf580adae313639"} Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.779479 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="12505db5-845e-4ebd-a28e-1022d60f55d5" containerName="glance-log" containerID="cri-o://dbd2dc62bce790f5ffdd6ef43db2629dc08abbd4aec43fb878b1317ceda929d1" gracePeriod=30 Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.779601 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="12505db5-845e-4ebd-a28e-1022d60f55d5" containerName="glance-httpd" containerID="cri-o://5fa76fbc7195f54343221081dd71c94a098c33f1f196ec0bcaf580adae313639" gracePeriod=30 Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.797288 4727 generic.go:334] "Generic (PLEG): container finished" podID="63d40623-45b4-447b-a237-36d83666ad4d" containerID="377e15f73a8ebc68b55d8e90d2bd4b962536ef4e72aa4937dfe8cb9909b68bcd" exitCode=0 Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.797387 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qphqt" event={"ID":"63d40623-45b4-447b-a237-36d83666ad4d","Type":"ContainerDied","Data":"377e15f73a8ebc68b55d8e90d2bd4b962536ef4e72aa4937dfe8cb9909b68bcd"} Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.805591 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" podStartSLOduration=21.805567142 podStartE2EDuration="21.805567142s" podCreationTimestamp="2025-12-10 14:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:23.796299818 +0000 UTC m=+1547.991074360" watchObservedRunningTime="2025-12-10 14:57:23.805567142 +0000 UTC m=+1548.000341684" Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.810683 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d61f7e3d-578d-4429-ad9c-31ba3e8be091","Type":"ContainerStarted","Data":"92eda431e7c98b312181042469e5e599d51f001136d004b8ae3b9225ea5aebda"} Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.839870 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=21.839843117 podStartE2EDuration="21.839843117s" podCreationTimestamp="2025-12-10 14:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:23.834898362 +0000 UTC m=+1548.029672904" watchObservedRunningTime="2025-12-10 14:57:23.839843117 +0000 UTC m=+1548.034617669" Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.870839 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=21.870809759 podStartE2EDuration="21.870809759s" podCreationTimestamp="2025-12-10 14:57:02 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:23.860355235 +0000 UTC m=+1548.055129797" watchObservedRunningTime="2025-12-10 14:57:23.870809759 +0000 UTC m=+1548.065584291" Dec 10 14:57:23 crc kubenswrapper[4727]: I1210 14:57:23.897360 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=67.89734092 podStartE2EDuration="1m7.89734092s" podCreationTimestamp="2025-12-10 14:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:23.897274818 +0000 UTC m=+1548.092049380" watchObservedRunningTime="2025-12-10 14:57:23.89734092 +0000 UTC m=+1548.092115472" Dec 10 14:57:24 crc kubenswrapper[4727]: I1210 14:57:24.823860 4727 generic.go:334] "Generic (PLEG): container finished" podID="e0fcc217-4afd-4074-bdc6-28013bf480e7" containerID="0dbf13d8075a6f38b4305565f8d96d3513116d92dd50d75dff3649120df0702d" exitCode=0 Dec 10 14:57:24 crc kubenswrapper[4727]: I1210 14:57:24.824052 4727 generic.go:334] "Generic (PLEG): container finished" podID="e0fcc217-4afd-4074-bdc6-28013bf480e7" containerID="41334820517a83c7fee537d5ca2a15301a3a6f9410038b1bad95f5d025d90528" exitCode=143 Dec 10 14:57:24 crc kubenswrapper[4727]: I1210 14:57:24.823962 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0fcc217-4afd-4074-bdc6-28013bf480e7","Type":"ContainerDied","Data":"0dbf13d8075a6f38b4305565f8d96d3513116d92dd50d75dff3649120df0702d"} Dec 10 14:57:24 crc kubenswrapper[4727]: I1210 14:57:24.824124 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0fcc217-4afd-4074-bdc6-28013bf480e7","Type":"ContainerDied","Data":"41334820517a83c7fee537d5ca2a15301a3a6f9410038b1bad95f5d025d90528"} Dec 10 14:57:24 crc kubenswrapper[4727]: I1210 14:57:24.827833 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8s8pt" event={"ID":"6a7dc74a-65ab-4440-b4d0-33c102b7baeb","Type":"ContainerStarted","Data":"117c24f1629e191a6ad2f9a8448ced1e9a50d8ac8f7b8097da25da5915028c21"} Dec 10 14:57:24 crc kubenswrapper[4727]: I1210 14:57:24.832136 4727 generic.go:334] "Generic (PLEG): container finished" podID="12505db5-845e-4ebd-a28e-1022d60f55d5" containerID="5fa76fbc7195f54343221081dd71c94a098c33f1f196ec0bcaf580adae313639" exitCode=0 Dec 10 14:57:24 crc kubenswrapper[4727]: I1210 14:57:24.832167 4727 generic.go:334] "Generic (PLEG): container finished" podID="12505db5-845e-4ebd-a28e-1022d60f55d5" containerID="dbd2dc62bce790f5ffdd6ef43db2629dc08abbd4aec43fb878b1317ceda929d1" exitCode=143 Dec 10 14:57:24 crc kubenswrapper[4727]: I1210 14:57:24.832338 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"12505db5-845e-4ebd-a28e-1022d60f55d5","Type":"ContainerDied","Data":"5fa76fbc7195f54343221081dd71c94a098c33f1f196ec0bcaf580adae313639"} Dec 10 14:57:24 crc kubenswrapper[4727]: I1210 14:57:24.832362 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"12505db5-845e-4ebd-a28e-1022d60f55d5","Type":"ContainerDied","Data":"dbd2dc62bce790f5ffdd6ef43db2629dc08abbd4aec43fb878b1317ceda929d1"} Dec 10 14:57:24 crc kubenswrapper[4727]: I1210 14:57:24.862865 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-db-sync-8s8pt" podStartSLOduration=5.024535231 podStartE2EDuration="1m7.862842764s" podCreationTimestamp="2025-12-10 14:56:17 +0000 UTC" firstStartedPulling="2025-12-10 14:56:20.204399162 +0000 UTC m=+1484.399173704" lastFinishedPulling="2025-12-10 14:57:23.042706645 +0000 UTC m=+1547.237481237" observedRunningTime="2025-12-10 14:57:24.852976564 +0000 UTC m=+1549.047751116" watchObservedRunningTime="2025-12-10 14:57:24.862842764 +0000 UTC m=+1549.057617306" Dec 10 14:57:26 crc kubenswrapper[4727]: I1210 14:57:26.744172 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.260896 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.366784 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpz7z\" (UniqueName: \"kubernetes.io/projected/63d40623-45b4-447b-a237-36d83666ad4d-kube-api-access-mpz7z\") pod \"63d40623-45b4-447b-a237-36d83666ad4d\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.367321 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-fernet-keys\") pod \"63d40623-45b4-447b-a237-36d83666ad4d\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.367366 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-scripts\") pod \"63d40623-45b4-447b-a237-36d83666ad4d\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.367919 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-credential-keys\") pod \"63d40623-45b4-447b-a237-36d83666ad4d\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.368023 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-combined-ca-bundle\") pod \"63d40623-45b4-447b-a237-36d83666ad4d\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.368306 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-config-data\") pod \"63d40623-45b4-447b-a237-36d83666ad4d\" (UID: \"63d40623-45b4-447b-a237-36d83666ad4d\") " Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.375049 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "63d40623-45b4-447b-a237-36d83666ad4d" (UID: "63d40623-45b4-447b-a237-36d83666ad4d"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.375074 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "63d40623-45b4-447b-a237-36d83666ad4d" (UID: "63d40623-45b4-447b-a237-36d83666ad4d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.375158 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63d40623-45b4-447b-a237-36d83666ad4d-kube-api-access-mpz7z" (OuterVolumeSpecName: "kube-api-access-mpz7z") pod "63d40623-45b4-447b-a237-36d83666ad4d" (UID: "63d40623-45b4-447b-a237-36d83666ad4d"). InnerVolumeSpecName "kube-api-access-mpz7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.385174 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-scripts" (OuterVolumeSpecName: "scripts") pod "63d40623-45b4-447b-a237-36d83666ad4d" (UID: "63d40623-45b4-447b-a237-36d83666ad4d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.403436 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63d40623-45b4-447b-a237-36d83666ad4d" (UID: "63d40623-45b4-447b-a237-36d83666ad4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.404631 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-config-data" (OuterVolumeSpecName: "config-data") pod "63d40623-45b4-447b-a237-36d83666ad4d" (UID: "63d40623-45b4-447b-a237-36d83666ad4d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.470531 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.470921 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpz7z\" (UniqueName: \"kubernetes.io/projected/63d40623-45b4-447b-a237-36d83666ad4d-kube-api-access-mpz7z\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.470938 4727 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.470952 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.470963 4727 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.471002 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63d40623-45b4-447b-a237-36d83666ad4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.885752 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qphqt" event={"ID":"63d40623-45b4-447b-a237-36d83666ad4d","Type":"ContainerDied","Data":"59f403b8d2800f32e8b8570e86c0230d256ff29b430b2eb6ce5968ee7c1c529a"} Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.885830 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59f403b8d2800f32e8b8570e86c0230d256ff29b430b2eb6ce5968ee7c1c529a" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.885886 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qphqt" Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.901878 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1d20cc-47dd-4803-a7dd-36e43d2f2d43" containerID="d6cdde58b5106b6ac3a3cc4f817fb81b41401c8bf479545683f4e3dee0f2e5e8" exitCode=0 Dec 10 14:57:27 crc kubenswrapper[4727]: I1210 14:57:27.901939 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vrmr7" event={"ID":"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43","Type":"ContainerDied","Data":"d6cdde58b5106b6ac3a3cc4f817fb81b41401c8bf479545683f4e3dee0f2e5e8"} Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.117146 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.195161 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qhqnk"] Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.195443 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" podUID="e70cfc42-b547-4996-81d0-8be6145dc587" containerName="dnsmasq-dns" containerID="cri-o://ba9cb98be55ac28e6a5883d5930f75e1a2415c4669ba49a2f6c5a9434d8b9f56" gracePeriod=10 Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.399666 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-584578f496-pdfh5"] Dec 10 14:57:28 crc kubenswrapper[4727]: E1210 14:57:28.400334 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d40623-45b4-447b-a237-36d83666ad4d" containerName="keystone-bootstrap" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.400356 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d40623-45b4-447b-a237-36d83666ad4d" containerName="keystone-bootstrap" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.400569 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="63d40623-45b4-447b-a237-36d83666ad4d" containerName="keystone-bootstrap" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.403312 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.407971 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.408374 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.408539 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pw9zq" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.408677 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.408696 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.411854 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.417471 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-584578f496-pdfh5"] Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.495111 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-public-tls-certs\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.495171 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-internal-tls-certs\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.495548 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-credential-keys\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.495630 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-fernet-keys\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.495696 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-combined-ca-bundle\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.495739 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-config-data\") pod \"keystone-584578f496-pdfh5\" (UID: 
\"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.495769 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx9wp\" (UniqueName: \"kubernetes.io/projected/e730f65d-c13d-4603-8db0-ed64afa9584a-kube-api-access-gx9wp\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.496808 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-scripts\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.598360 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-public-tls-certs\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.598403 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-internal-tls-certs\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.598479 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-credential-keys\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.598523 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-fernet-keys\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.598551 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-combined-ca-bundle\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.598578 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-config-data\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.598598 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx9wp\" (UniqueName: \"kubernetes.io/projected/e730f65d-c13d-4603-8db0-ed64afa9584a-kube-api-access-gx9wp\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " 
pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.598626 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-scripts\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.607818 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-internal-tls-certs\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.608200 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-combined-ca-bundle\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.610741 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-public-tls-certs\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.611363 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-credential-keys\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.621468 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-scripts\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.621522 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-config-data\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.626589 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e730f65d-c13d-4603-8db0-ed64afa9584a-fernet-keys\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.630813 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx9wp\" (UniqueName: \"kubernetes.io/projected/e730f65d-c13d-4603-8db0-ed64afa9584a-kube-api-access-gx9wp\") pod \"keystone-584578f496-pdfh5\" (UID: \"e730f65d-c13d-4603-8db0-ed64afa9584a\") " pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.737096 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.917396 4727 generic.go:334] "Generic (PLEG): container finished" podID="e70cfc42-b547-4996-81d0-8be6145dc587" containerID="ba9cb98be55ac28e6a5883d5930f75e1a2415c4669ba49a2f6c5a9434d8b9f56" exitCode=0 Dec 10 14:57:28 crc kubenswrapper[4727]: I1210 14:57:28.917484 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" event={"ID":"e70cfc42-b547-4996-81d0-8be6145dc587","Type":"ContainerDied","Data":"ba9cb98be55ac28e6a5883d5930f75e1a2415c4669ba49a2f6c5a9434d8b9f56"} Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.623825 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.708172 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.785532 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12505db5-845e-4ebd-a28e-1022d60f55d5-logs\") pod \"12505db5-845e-4ebd-a28e-1022d60f55d5\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.785622 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsxsd\" (UniqueName: \"kubernetes.io/projected/12505db5-845e-4ebd-a28e-1022d60f55d5-kube-api-access-hsxsd\") pod \"12505db5-845e-4ebd-a28e-1022d60f55d5\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.785646 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-config-data\") pod \"12505db5-845e-4ebd-a28e-1022d60f55d5\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.785687 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-combined-ca-bundle\") pod \"12505db5-845e-4ebd-a28e-1022d60f55d5\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.785728 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/12505db5-845e-4ebd-a28e-1022d60f55d5-httpd-run\") pod \"12505db5-845e-4ebd-a28e-1022d60f55d5\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.785968 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") pod \"12505db5-845e-4ebd-a28e-1022d60f55d5\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.785992 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-scripts\") pod \"12505db5-845e-4ebd-a28e-1022d60f55d5\" (UID: \"12505db5-845e-4ebd-a28e-1022d60f55d5\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.785987 4727 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12505db5-845e-4ebd-a28e-1022d60f55d5-logs" (OuterVolumeSpecName: "logs") pod "12505db5-845e-4ebd-a28e-1022d60f55d5" (UID: "12505db5-845e-4ebd-a28e-1022d60f55d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.786954 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12505db5-845e-4ebd-a28e-1022d60f55d5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "12505db5-845e-4ebd-a28e-1022d60f55d5" (UID: "12505db5-845e-4ebd-a28e-1022d60f55d5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.788402 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12505db5-845e-4ebd-a28e-1022d60f55d5-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.788424 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/12505db5-845e-4ebd-a28e-1022d60f55d5-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.792638 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-scripts" (OuterVolumeSpecName: "scripts") pod "12505db5-845e-4ebd-a28e-1022d60f55d5" (UID: "12505db5-845e-4ebd-a28e-1022d60f55d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.801657 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12505db5-845e-4ebd-a28e-1022d60f55d5-kube-api-access-hsxsd" (OuterVolumeSpecName: "kube-api-access-hsxsd") pod "12505db5-845e-4ebd-a28e-1022d60f55d5" (UID: "12505db5-845e-4ebd-a28e-1022d60f55d5"). InnerVolumeSpecName "kube-api-access-hsxsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.806193 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vrmr7" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.847472 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12505db5-845e-4ebd-a28e-1022d60f55d5" (UID: "12505db5-845e-4ebd-a28e-1022d60f55d5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.891946 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-scripts\") pod \"e0fcc217-4afd-4074-bdc6-28013bf480e7\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.892041 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0fcc217-4afd-4074-bdc6-28013bf480e7-logs\") pod \"e0fcc217-4afd-4074-bdc6-28013bf480e7\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.892156 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") pod \"e0fcc217-4afd-4074-bdc6-28013bf480e7\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.892298 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldsh8\" (UniqueName: \"kubernetes.io/projected/e0fcc217-4afd-4074-bdc6-28013bf480e7-kube-api-access-ldsh8\") pod \"e0fcc217-4afd-4074-bdc6-28013bf480e7\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.892326 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-combined-ca-bundle\") pod \"e0fcc217-4afd-4074-bdc6-28013bf480e7\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.892349 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0fcc217-4afd-4074-bdc6-28013bf480e7-httpd-run\") pod \"e0fcc217-4afd-4074-bdc6-28013bf480e7\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.892382 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-config-data\") pod \"e0fcc217-4afd-4074-bdc6-28013bf480e7\" (UID: \"e0fcc217-4afd-4074-bdc6-28013bf480e7\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.892876 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.892896 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsxsd\" (UniqueName: \"kubernetes.io/projected/12505db5-845e-4ebd-a28e-1022d60f55d5-kube-api-access-hsxsd\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.892923 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.893901 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0fcc217-4afd-4074-bdc6-28013bf480e7-httpd-run" (OuterVolumeSpecName: 
"httpd-run") pod "e0fcc217-4afd-4074-bdc6-28013bf480e7" (UID: "e0fcc217-4afd-4074-bdc6-28013bf480e7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.894256 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0fcc217-4afd-4074-bdc6-28013bf480e7-logs" (OuterVolumeSpecName: "logs") pod "e0fcc217-4afd-4074-bdc6-28013bf480e7" (UID: "e0fcc217-4afd-4074-bdc6-28013bf480e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.895007 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590" (OuterVolumeSpecName: "glance") pod "12505db5-845e-4ebd-a28e-1022d60f55d5" (UID: "12505db5-845e-4ebd-a28e-1022d60f55d5"). InnerVolumeSpecName "pvc-7a322af2-a713-4577-8221-4254467d2590". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.898552 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-scripts" (OuterVolumeSpecName: "scripts") pod "e0fcc217-4afd-4074-bdc6-28013bf480e7" (UID: "e0fcc217-4afd-4074-bdc6-28013bf480e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.900980 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0fcc217-4afd-4074-bdc6-28013bf480e7-kube-api-access-ldsh8" (OuterVolumeSpecName: "kube-api-access-ldsh8") pod "e0fcc217-4afd-4074-bdc6-28013bf480e7" (UID: "e0fcc217-4afd-4074-bdc6-28013bf480e7"). InnerVolumeSpecName "kube-api-access-ldsh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.920036 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb" (OuterVolumeSpecName: "glance") pod "e0fcc217-4afd-4074-bdc6-28013bf480e7" (UID: "e0fcc217-4afd-4074-bdc6-28013bf480e7"). InnerVolumeSpecName "pvc-0790eff2-014b-4039-a134-0378a633bedb". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.934598 4727 generic.go:334] "Generic (PLEG): container finished" podID="eb7744ce-28ea-4e2f-a20e-a925b562e221" containerID="2d0d9ae6c91f1f9abf72e1dbe18e52380c19f15ca37e014d42f4fce841265487" exitCode=0 Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.934965 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8fgtj" event={"ID":"eb7744ce-28ea-4e2f-a20e-a925b562e221","Type":"ContainerDied","Data":"2d0d9ae6c91f1f9abf72e1dbe18e52380c19f15ca37e014d42f4fce841265487"} Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.941015 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.942183 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0fcc217-4afd-4074-bdc6-28013bf480e7","Type":"ContainerDied","Data":"a957a8e1deb35e8d460dda9bc44e0105a5ecc7b49c446c098b608d1ebab07043"} Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.955501 4727 scope.go:117] "RemoveContainer" containerID="0dbf13d8075a6f38b4305565f8d96d3513116d92dd50d75dff3649120df0702d" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.970318 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"12505db5-845e-4ebd-a28e-1022d60f55d5","Type":"ContainerDied","Data":"0659b6a7f65b629c40d175301172a44994510d5aff50eabcee0b5d93167b30de"} Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.970458 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.986625 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vrmr7" event={"ID":"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43","Type":"ContainerDied","Data":"fc09d9e9bba72267a4c120952f83ec952784ba88225a64b0528220c6434fbfc1"} Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.986679 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc09d9e9bba72267a4c120952f83ec952784ba88225a64b0528220c6434fbfc1" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.986711 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vrmr7" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.996389 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-combined-ca-bundle\") pod \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.996489 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr5mn\" (UniqueName: \"kubernetes.io/projected/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-kube-api-access-fr5mn\") pod \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.996544 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-scripts\") pod \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.996673 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-logs\") pod \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.996750 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-config-data\") pod \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\" (UID: \"fe1d20cc-47dd-4803-a7dd-36e43d2f2d43\") " Dec 10 14:57:29 crc 
kubenswrapper[4727]: I1210 14:57:29.997411 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") on node \"crc\" " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.997432 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.997442 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0fcc217-4afd-4074-bdc6-28013bf480e7-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.997458 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") on node \"crc\" " Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.997477 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldsh8\" (UniqueName: \"kubernetes.io/projected/e0fcc217-4afd-4074-bdc6-28013bf480e7-kube-api-access-ldsh8\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:29 crc kubenswrapper[4727]: I1210 14:57:29.997487 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0fcc217-4afd-4074-bdc6-28013bf480e7-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.003389 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-logs" (OuterVolumeSpecName: "logs") pod "fe1d20cc-47dd-4803-a7dd-36e43d2f2d43" (UID: "fe1d20cc-47dd-4803-a7dd-36e43d2f2d43"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.006501 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-kube-api-access-fr5mn" (OuterVolumeSpecName: "kube-api-access-fr5mn") pod "fe1d20cc-47dd-4803-a7dd-36e43d2f2d43" (UID: "fe1d20cc-47dd-4803-a7dd-36e43d2f2d43"). InnerVolumeSpecName "kube-api-access-fr5mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.020951 4727 scope.go:117] "RemoveContainer" containerID="41334820517a83c7fee537d5ca2a15301a3a6f9410038b1bad95f5d025d90528" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.021043 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-scripts" (OuterVolumeSpecName: "scripts") pod "fe1d20cc-47dd-4803-a7dd-36e43d2f2d43" (UID: "fe1d20cc-47dd-4803-a7dd-36e43d2f2d43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.040092 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0fcc217-4afd-4074-bdc6-28013bf480e7" (UID: "e0fcc217-4afd-4074-bdc6-28013bf480e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.052335 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-config-data" (OuterVolumeSpecName: "config-data") pod "12505db5-845e-4ebd-a28e-1022d60f55d5" (UID: "12505db5-845e-4ebd-a28e-1022d60f55d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.054409 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.058468 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe1d20cc-47dd-4803-a7dd-36e43d2f2d43" (UID: "fe1d20cc-47dd-4803-a7dd-36e43d2f2d43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.066011 4727 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.066209 4727 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0790eff2-014b-4039-a134-0378a633bedb" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb") on node "crc" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.100154 4727 reconciler_common.go:293] "Volume detached for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.100200 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.100217 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr5mn\" (UniqueName: \"kubernetes.io/projected/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-kube-api-access-fr5mn\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.100234 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.100246 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12505db5-845e-4ebd-a28e-1022d60f55d5-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.100261 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.100272 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 
14:57:30.113048 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5f9b858948-mx57t"]
Dec 10 14:57:30 crc kubenswrapper[4727]: E1210 14:57:30.113714 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12505db5-845e-4ebd-a28e-1022d60f55d5" containerName="glance-httpd"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.113739 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="12505db5-845e-4ebd-a28e-1022d60f55d5" containerName="glance-httpd"
Dec 10 14:57:30 crc kubenswrapper[4727]: E1210 14:57:30.113768 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12505db5-845e-4ebd-a28e-1022d60f55d5" containerName="glance-log"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.113777 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="12505db5-845e-4ebd-a28e-1022d60f55d5" containerName="glance-log"
Dec 10 14:57:30 crc kubenswrapper[4727]: E1210 14:57:30.113789 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0fcc217-4afd-4074-bdc6-28013bf480e7" containerName="glance-log"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.113797 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0fcc217-4afd-4074-bdc6-28013bf480e7" containerName="glance-log"
Dec 10 14:57:30 crc kubenswrapper[4727]: E1210 14:57:30.113815 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0fcc217-4afd-4074-bdc6-28013bf480e7" containerName="glance-httpd"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.113842 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0fcc217-4afd-4074-bdc6-28013bf480e7" containerName="glance-httpd"
Dec 10 14:57:30 crc kubenswrapper[4727]: E1210 14:57:30.113859 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e70cfc42-b547-4996-81d0-8be6145dc587" containerName="dnsmasq-dns"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.113867 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e70cfc42-b547-4996-81d0-8be6145dc587" containerName="dnsmasq-dns"
Dec 10 14:57:30 crc kubenswrapper[4727]: E1210 14:57:30.113882 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e70cfc42-b547-4996-81d0-8be6145dc587" containerName="init"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.113889 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e70cfc42-b547-4996-81d0-8be6145dc587" containerName="init"
Dec 10 14:57:30 crc kubenswrapper[4727]: E1210 14:57:30.113926 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1d20cc-47dd-4803-a7dd-36e43d2f2d43" containerName="placement-db-sync"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.113935 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1d20cc-47dd-4803-a7dd-36e43d2f2d43" containerName="placement-db-sync"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.114188 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="12505db5-845e-4ebd-a28e-1022d60f55d5" containerName="glance-log"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.114205 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0fcc217-4afd-4074-bdc6-28013bf480e7" containerName="glance-log"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.114223 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0fcc217-4afd-4074-bdc6-28013bf480e7" containerName="glance-httpd"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.114244 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e70cfc42-b547-4996-81d0-8be6145dc587" containerName="dnsmasq-dns"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.114259 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="12505db5-845e-4ebd-a28e-1022d60f55d5" containerName="glance-httpd"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.114274 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe1d20cc-47dd-4803-a7dd-36e43d2f2d43" containerName="placement-db-sync"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.115650 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.130174 4727 scope.go:117] "RemoveContainer" containerID="5fa76fbc7195f54343221081dd71c94a098c33f1f196ec0bcaf580adae313639"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.142558 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.142830 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.172350 4727 scope.go:117] "RemoveContainer" containerID="dbd2dc62bce790f5ffdd6ef43db2629dc08abbd4aec43fb878b1317ceda929d1"
Dec 10 14:57:30 crc kubenswrapper[4727]: W1210 14:57:30.195130 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode730f65d_c13d_4603_8db0_ed64afa9584a.slice/crio-09dc6e4062214224faa39e89ba1bc8e7e732777e8673eaf312c2ee5ca3015fe7 WatchSource:0}: Error finding container 09dc6e4062214224faa39e89ba1bc8e7e732777e8673eaf312c2ee5ca3015fe7: Status 404 returned error can't find the container with id 09dc6e4062214224faa39e89ba1bc8e7e732777e8673eaf312c2ee5ca3015fe7
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.201818 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-config\") pod \"e70cfc42-b547-4996-81d0-8be6145dc587\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") "
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.202276 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-ovsdbserver-sb\") pod \"e70cfc42-b547-4996-81d0-8be6145dc587\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") "
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.202435 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-dns-swift-storage-0\") pod \"e70cfc42-b547-4996-81d0-8be6145dc587\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") "
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.202533 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-dns-svc\") pod \"e70cfc42-b547-4996-81d0-8be6145dc587\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") "
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.202657 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-ovsdbserver-nb\") pod \"e70cfc42-b547-4996-81d0-8be6145dc587\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") "
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.202788 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n568k\" (UniqueName: \"kubernetes.io/projected/e70cfc42-b547-4996-81d0-8be6145dc587-kube-api-access-n568k\") pod \"e70cfc42-b547-4996-81d0-8be6145dc587\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") "
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.203173 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0297dbf-60f7-48b2-b36d-3ed437da1944-scripts\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.203269 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0297dbf-60f7-48b2-b36d-3ed437da1944-logs\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.203414 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0297dbf-60f7-48b2-b36d-3ed437da1944-config-data\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.203528 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0297dbf-60f7-48b2-b36d-3ed437da1944-combined-ca-bundle\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.203662 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0297dbf-60f7-48b2-b36d-3ed437da1944-internal-tls-certs\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.203828 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9m7k\" (UniqueName: \"kubernetes.io/projected/f0297dbf-60f7-48b2-b36d-3ed437da1944-kube-api-access-h9m7k\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.203973 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0297dbf-60f7-48b2-b36d-3ed437da1944-public-tls-certs\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.212078 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f9b858948-mx57t"]
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.220295 4727 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.220494 4727 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7a322af2-a713-4577-8221-4254467d2590" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590") on node "crc"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.220545 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-584578f496-pdfh5"]
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.222277 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-config-data" (OuterVolumeSpecName: "config-data") pod "e0fcc217-4afd-4074-bdc6-28013bf480e7" (UID: "e0fcc217-4afd-4074-bdc6-28013bf480e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.259345 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e70cfc42-b547-4996-81d0-8be6145dc587-kube-api-access-n568k" (OuterVolumeSpecName: "kube-api-access-n568k") pod "e70cfc42-b547-4996-81d0-8be6145dc587" (UID: "e70cfc42-b547-4996-81d0-8be6145dc587"). InnerVolumeSpecName "kube-api-access-n568k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.316745 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.319830 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9m7k\" (UniqueName: \"kubernetes.io/projected/f0297dbf-60f7-48b2-b36d-3ed437da1944-kube-api-access-h9m7k\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.319973 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0297dbf-60f7-48b2-b36d-3ed437da1944-public-tls-certs\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.320033 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0297dbf-60f7-48b2-b36d-3ed437da1944-logs\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.320056 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0297dbf-60f7-48b2-b36d-3ed437da1944-scripts\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.320170 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0297dbf-60f7-48b2-b36d-3ed437da1944-config-data\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.320235 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0297dbf-60f7-48b2-b36d-3ed437da1944-combined-ca-bundle\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.320341 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0297dbf-60f7-48b2-b36d-3ed437da1944-internal-tls-certs\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.320490 4727 reconciler_common.go:293] "Volume detached for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") on node \"crc\" DevicePath \"\""
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.320508 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n568k\" (UniqueName: \"kubernetes.io/projected/e70cfc42-b547-4996-81d0-8be6145dc587-kube-api-access-n568k\") on node \"crc\" DevicePath \"\""
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.320519 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0fcc217-4afd-4074-bdc6-28013bf480e7-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.321669 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0297dbf-60f7-48b2-b36d-3ed437da1944-logs\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.333419 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0297dbf-60f7-48b2-b36d-3ed437da1944-scripts\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.333464 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0297dbf-60f7-48b2-b36d-3ed437da1944-config-data\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.336800 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0297dbf-60f7-48b2-b36d-3ed437da1944-internal-tls-certs\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.336879 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.336899 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0297dbf-60f7-48b2-b36d-3ed437da1944-public-tls-certs\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.343784 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0297dbf-60f7-48b2-b36d-3ed437da1944-combined-ca-bundle\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.368573 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.388028 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.393614 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9m7k\" (UniqueName: \"kubernetes.io/projected/f0297dbf-60f7-48b2-b36d-3ed437da1944-kube-api-access-h9m7k\") pod \"placement-5f9b858948-mx57t\" (UID: \"f0297dbf-60f7-48b2-b36d-3ed437da1944\") " pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.412622 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.416107 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.422960 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.428569 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.428966 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zq7zd"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.430180 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.430516 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.466070 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f9b858948-mx57t"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.477997 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.480274 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.494689 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.499691 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.508545 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.527694 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eab25e3-c058-4db1-b610-e5394ae0c2c1-logs\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.527769 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-config-data\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.527831 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7eab25e3-c058-4db1-b610-e5394ae0c2c1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.527860 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.527949 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.528083 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-scripts\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.528131 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srv4k\" (UniqueName: \"kubernetes.io/projected/7eab25e3-c058-4db1-b610-e5394ae0c2c1-kube-api-access-srv4k\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.528339 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.615260 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12505db5-845e-4ebd-a28e-1022d60f55d5" path="/var/lib/kubelet/pods/12505db5-845e-4ebd-a28e-1022d60f55d5/volumes"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.616052 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0fcc217-4afd-4074-bdc6-28013bf480e7" path="/var/lib/kubelet/pods/e0fcc217-4afd-4074-bdc6-28013bf480e7/volumes"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.629715 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c526c7c-ad39-4172-a998-a6935f2522e2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.629777 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.629836 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.629872 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-scripts\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.630001 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srv4k\" (UniqueName: \"kubernetes.io/projected/7eab25e3-c058-4db1-b610-e5394ae0c2c1-kube-api-access-srv4k\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.630043 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.630126 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.630211 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65sgr\" (UniqueName: \"kubernetes.io/projected/1c526c7c-ad39-4172-a998-a6935f2522e2-kube-api-access-65sgr\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.630239 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eab25e3-c058-4db1-b610-e5394ae0c2c1-logs\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.630259 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-config-data\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.630274 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.630290 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.630313 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7eab25e3-c058-4db1-b610-e5394ae0c2c1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.630331 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.630359 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c526c7c-ad39-4172-a998-a6935f2522e2-logs\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.630386 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.633658 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7eab25e3-c058-4db1-b610-e5394ae0c2c1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.633932 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eab25e3-c058-4db1-b610-e5394ae0c2c1-logs\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.664018 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-config-data\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.664501 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.669656 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.669692 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/db4de05049fa91af9a988d2a2b63e79bbb68f0fa95a3085791268e44688729c7/globalmount\"" pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.669754 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.670427 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-scripts\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.680880 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srv4k\" (UniqueName: \"kubernetes.io/projected/7eab25e3-c058-4db1-b610-e5394ae0c2c1-kube-api-access-srv4k\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.733387 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65sgr\" (UniqueName: \"kubernetes.io/projected/1c526c7c-ad39-4172-a998-a6935f2522e2-kube-api-access-65sgr\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.733448 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.733465 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.733497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c526c7c-ad39-4172-a998-a6935f2522e2-logs\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.733553 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c526c7c-ad39-4172-a998-a6935f2522e2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.733578 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.733598 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.733634 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.734647 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c526c7c-ad39-4172-a998-a6935f2522e2-logs\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.734720 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c526c7c-ad39-4172-a998-a6935f2522e2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.739650 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.739698 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a620bc24bbc5ba249cf329bc1ea3ed90fd24230212b99e061c02e82f00eaedd9/globalmount\"" pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.742403 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.742777 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.743204 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.756306 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.761595 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65sgr\" (UniqueName: \"kubernetes.io/projected/1c526c7c-ad39-4172-a998-a6935f2522e2-kube-api-access-65sgr\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.850041 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-config" (OuterVolumeSpecName: "config") pod "e70cfc42-b547-4996-81d0-8be6145dc587" (UID: "e70cfc42-b547-4996-81d0-8be6145dc587"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.864064 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-config-data" (OuterVolumeSpecName: "config-data") pod "fe1d20cc-47dd-4803-a7dd-36e43d2f2d43" (UID: "fe1d20cc-47dd-4803-a7dd-36e43d2f2d43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.895745 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e70cfc42-b547-4996-81d0-8be6145dc587" (UID: "e70cfc42-b547-4996-81d0-8be6145dc587"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.917694 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e70cfc42-b547-4996-81d0-8be6145dc587" (UID: "e70cfc42-b547-4996-81d0-8be6145dc587"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.922539 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") pod \"glance-default-internal-api-0\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.940476 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e70cfc42-b547-4996-81d0-8be6145dc587" (UID: "e70cfc42-b547-4996-81d0-8be6145dc587"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.940724 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") pod \"glance-default-external-api-0\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " pod="openstack/glance-default-external-api-0"
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.941604 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-dns-swift-storage-0\") pod \"e70cfc42-b547-4996-81d0-8be6145dc587\" (UID: \"e70cfc42-b547-4996-81d0-8be6145dc587\") "
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.943521 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.943560 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.943582 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-config\") on node \"crc\" DevicePath \"\""
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.943596 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 10 14:57:30 crc kubenswrapper[4727]: W1210 14:57:30.943874 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e70cfc42-b547-4996-81d0-8be6145dc587/volumes/kubernetes.io~configmap/dns-swift-storage-0
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.943949 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e70cfc42-b547-4996-81d0-8be6145dc587" (UID: "e70cfc42-b547-4996-81d0-8be6145dc587"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:57:30 crc kubenswrapper[4727]: I1210 14:57:30.962683 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e70cfc42-b547-4996-81d0-8be6145dc587" (UID: "e70cfc42-b547-4996-81d0-8be6145dc587"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.018809 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-rnv6s" event={"ID":"8ef9b442-2a15-4657-b111-5af4a72d39e4","Type":"ContainerStarted","Data":"629cf608530b2484323c36369fbc50bcd816b4833942e8e4535ce7e7bdae36c1"}
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.030749 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"353e8adf-9611-450e-ac07-270a2550ee0e","Type":"ContainerStarted","Data":"e6b8adc746b2fe39675d6e3eef97a980b97353ea2505b234937ee868e4901699"}
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.040948 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-rnv6s" podStartSLOduration=3.5528496069999997 podStartE2EDuration="1m13.040894436s" podCreationTimestamp="2025-12-10 14:56:18 +0000 UTC" firstStartedPulling="2025-12-10 14:56:20.15482369 +0000 UTC m=+1484.349598232" lastFinishedPulling="2025-12-10 14:57:29.642868519 +0000 UTC m=+1553.837643061" observedRunningTime="2025-12-10 14:57:31.037972211 +0000 UTC m=+1555.232746753" watchObservedRunningTime="2025-12-10 14:57:31.040894436 +0000 UTC m=+1555.235668978"
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.042149 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-584578f496-pdfh5" event={"ID":"e730f65d-c13d-4603-8db0-ed64afa9584a","Type":"ContainerStarted","Data":"09dc6e4062214224faa39e89ba1bc8e7e732777e8673eaf312c2ee5ca3015fe7"}
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.045180 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.045214 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e70cfc42-b547-4996-81d0-8be6145dc587-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.047695 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk"
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.050324 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qhqnk" event={"ID":"e70cfc42-b547-4996-81d0-8be6145dc587","Type":"ContainerDied","Data":"077b1d20c3cf5d3ef01d4a993d13e329a652543290d57bbb81b42851c88a67da"}
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.050376 4727 scope.go:117] "RemoveContainer" containerID="ba9cb98be55ac28e6a5883d5930f75e1a2415c4669ba49a2f6c5a9434d8b9f56"
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.099649 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qhqnk"]
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.104653 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.113497 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qhqnk"]
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.125379 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.144883 4727 scope.go:117] "RemoveContainer" containerID="0c2bf73919103062c830ef5f6142fe389b298690a3989a36ab70417276ff9d32"
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.150014 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f9b858948-mx57t"]
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.524927 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8fgtj"
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.558102 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv7nz\" (UniqueName: \"kubernetes.io/projected/eb7744ce-28ea-4e2f-a20e-a925b562e221-kube-api-access-gv7nz\") pod \"eb7744ce-28ea-4e2f-a20e-a925b562e221\" (UID: \"eb7744ce-28ea-4e2f-a20e-a925b562e221\") "
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.558443 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb7744ce-28ea-4e2f-a20e-a925b562e221-db-sync-config-data\") pod \"eb7744ce-28ea-4e2f-a20e-a925b562e221\" (UID: \"eb7744ce-28ea-4e2f-a20e-a925b562e221\") "
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.558560 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7744ce-28ea-4e2f-a20e-a925b562e221-combined-ca-bundle\") pod \"eb7744ce-28ea-4e2f-a20e-a925b562e221\" (UID: \"eb7744ce-28ea-4e2f-a20e-a925b562e221\") "
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.565966 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb7744ce-28ea-4e2f-a20e-a925b562e221-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "eb7744ce-28ea-4e2f-a20e-a925b562e221" (UID: "eb7744ce-28ea-4e2f-a20e-a925b562e221"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.568002 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb7744ce-28ea-4e2f-a20e-a925b562e221-kube-api-access-gv7nz" (OuterVolumeSpecName: "kube-api-access-gv7nz") pod "eb7744ce-28ea-4e2f-a20e-a925b562e221" (UID: "eb7744ce-28ea-4e2f-a20e-a925b562e221"). InnerVolumeSpecName "kube-api-access-gv7nz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.611149 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb7744ce-28ea-4e2f-a20e-a925b562e221-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb7744ce-28ea-4e2f-a20e-a925b562e221" (UID: "eb7744ce-28ea-4e2f-a20e-a925b562e221"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.662117 4727 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb7744ce-28ea-4e2f-a20e-a925b562e221-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.662231 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7744ce-28ea-4e2f-a20e-a925b562e221-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.662259 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv7nz\" (UniqueName: \"kubernetes.io/projected/eb7744ce-28ea-4e2f-a20e-a925b562e221-kube-api-access-gv7nz\") on node \"crc\" DevicePath \"\""
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.745861 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.770513 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Dec 10 14:57:31 crc kubenswrapper[4727]: I1210 14:57:31.921844 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.031124 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.092424 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8fgtj" event={"ID":"eb7744ce-28ea-4e2f-a20e-a925b562e221","Type":"ContainerDied","Data":"629223e64f422fc6aba151b448806bb028697a80021709e1c572272814a876c1"}
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.092501 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="629223e64f422fc6aba151b448806bb028697a80021709e1c572272814a876c1"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.092637 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8fgtj"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.099594 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-584578f496-pdfh5" event={"ID":"e730f65d-c13d-4603-8db0-ed64afa9584a","Type":"ContainerStarted","Data":"abd6646dd741dd5fde333eb80f03b181d16b91059d7f0ccbea5e3e493a298d66"}
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.100496 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-584578f496-pdfh5"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.107641 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c526c7c-ad39-4172-a998-a6935f2522e2","Type":"ContainerStarted","Data":"59d8c8c71b15cea0c91fd17fe2e9e76989c04323fa2a373e286b35ba248f017e"}
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.114685 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f9b858948-mx57t" event={"ID":"f0297dbf-60f7-48b2-b36d-3ed437da1944","Type":"ContainerStarted","Data":"e2d3361d29bfd22aef50a2f2b5a2306cd07d5b685250bcb6fd9b83495eb4c58c"}
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.134224 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-584578f496-pdfh5" podStartSLOduration=4.134199614 podStartE2EDuration="4.134199614s" podCreationTimestamp="2025-12-10 14:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:32.13026144 +0000 UTC m=+1556.325035982" watchObservedRunningTime="2025-12-10 14:57:32.134199614 +0000 UTC m=+1556.328974156"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.138002 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7eab25e3-c058-4db1-b610-e5394ae0c2c1","Type":"ContainerStarted","Data":"b616c5970f826879b00a2c4c722b0594e74952d951ad847d044e7f083da8c380"}
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.163555 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.308517 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-cf8d46dd8-mljgg"]
Dec 10 14:57:32 crc kubenswrapper[4727]: E1210 14:57:32.309110 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb7744ce-28ea-4e2f-a20e-a925b562e221" containerName="barbican-db-sync"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.309130 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb7744ce-28ea-4e2f-a20e-a925b562e221" containerName="barbican-db-sync"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.309430 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb7744ce-28ea-4e2f-a20e-a925b562e221" containerName="barbican-db-sync"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.323437 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5c6cd97dcf-twztd"]
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.325483 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.326450 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.332593 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.333608 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-66mjv"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.338704 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.341984 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-cf8d46dd8-mljgg"]
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.349781 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c6cd97dcf-twztd"]
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.352749 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.525867 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573bf6bf-ba45-4eaf-a331-e11c2696a1ab-logs\") pod \"barbican-worker-5c6cd97dcf-twztd\" (UID: \"573bf6bf-ba45-4eaf-a331-e11c2696a1ab\") " pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.542113 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573bf6bf-ba45-4eaf-a331-e11c2696a1ab-combined-ca-bundle\") pod \"barbican-worker-5c6cd97dcf-twztd\" (UID: \"573bf6bf-ba45-4eaf-a331-e11c2696a1ab\") " pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.542203 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a20e7837-8316-4dd1-91d5-60d2ea604213-config-data-custom\") pod \"barbican-keystone-listener-cf8d46dd8-mljgg\" (UID: \"a20e7837-8316-4dd1-91d5-60d2ea604213\") " pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.542263 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn4tg\" (UniqueName: \"kubernetes.io/projected/a20e7837-8316-4dd1-91d5-60d2ea604213-kube-api-access-pn4tg\") pod \"barbican-keystone-listener-cf8d46dd8-mljgg\" (UID: \"a20e7837-8316-4dd1-91d5-60d2ea604213\") " pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.542289 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a20e7837-8316-4dd1-91d5-60d2ea604213-logs\") pod \"barbican-keystone-listener-cf8d46dd8-mljgg\" (UID: \"a20e7837-8316-4dd1-91d5-60d2ea604213\") " pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.542563 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20e7837-8316-4dd1-91d5-60d2ea604213-combined-ca-bundle\") pod \"barbican-keystone-listener-cf8d46dd8-mljgg\" (UID: \"a20e7837-8316-4dd1-91d5-60d2ea604213\") " pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.542737 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/573bf6bf-ba45-4eaf-a331-e11c2696a1ab-config-data-custom\") pod \"barbican-worker-5c6cd97dcf-twztd\" (UID: \"573bf6bf-ba45-4eaf-a331-e11c2696a1ab\") " pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.542847 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqx6k\" (UniqueName: \"kubernetes.io/projected/573bf6bf-ba45-4eaf-a331-e11c2696a1ab-kube-api-access-vqx6k\") pod \"barbican-worker-5c6cd97dcf-twztd\" (UID: \"573bf6bf-ba45-4eaf-a331-e11c2696a1ab\") " pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.542888 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573bf6bf-ba45-4eaf-a331-e11c2696a1ab-config-data\") pod \"barbican-worker-5c6cd97dcf-twztd\" (UID: \"573bf6bf-ba45-4eaf-a331-e11c2696a1ab\") " pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.543016 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20e7837-8316-4dd1-91d5-60d2ea604213-config-data\") pod \"barbican-keystone-listener-cf8d46dd8-mljgg\" (UID: \"a20e7837-8316-4dd1-91d5-60d2ea604213\") " pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.683604 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e70cfc42-b547-4996-81d0-8be6145dc587" path="/var/lib/kubelet/pods/e70cfc42-b547-4996-81d0-8be6145dc587/volumes"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.697890 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-njbjk"]
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.704074 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20e7837-8316-4dd1-91d5-60d2ea604213-config-data\") pod \"barbican-keystone-listener-cf8d46dd8-mljgg\" (UID: \"a20e7837-8316-4dd1-91d5-60d2ea604213\") " pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.704152 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573bf6bf-ba45-4eaf-a331-e11c2696a1ab-logs\") pod \"barbican-worker-5c6cd97dcf-twztd\" (UID: \"573bf6bf-ba45-4eaf-a331-e11c2696a1ab\") " pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.704223 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573bf6bf-ba45-4eaf-a331-e11c2696a1ab-combined-ca-bundle\") pod \"barbican-worker-5c6cd97dcf-twztd\" (UID: \"573bf6bf-ba45-4eaf-a331-e11c2696a1ab\") " pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.704280 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a20e7837-8316-4dd1-91d5-60d2ea604213-config-data-custom\") pod \"barbican-keystone-listener-cf8d46dd8-mljgg\" (UID: \"a20e7837-8316-4dd1-91d5-60d2ea604213\") " pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.704348 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn4tg\" (UniqueName: \"kubernetes.io/projected/a20e7837-8316-4dd1-91d5-60d2ea604213-kube-api-access-pn4tg\") pod \"barbican-keystone-listener-cf8d46dd8-mljgg\" (UID: \"a20e7837-8316-4dd1-91d5-60d2ea604213\") " pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.704381 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a20e7837-8316-4dd1-91d5-60d2ea604213-logs\") pod \"barbican-keystone-listener-cf8d46dd8-mljgg\" (UID: \"a20e7837-8316-4dd1-91d5-60d2ea604213\") " pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.704464 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20e7837-8316-4dd1-91d5-60d2ea604213-combined-ca-bundle\") pod \"barbican-keystone-listener-cf8d46dd8-mljgg\" (UID: \"a20e7837-8316-4dd1-91d5-60d2ea604213\") " pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.704564 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/573bf6bf-ba45-4eaf-a331-e11c2696a1ab-config-data-custom\") pod \"barbican-worker-5c6cd97dcf-twztd\" (UID: \"573bf6bf-ba45-4eaf-a331-e11c2696a1ab\") " pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.704648 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqx6k\" (UniqueName: \"kubernetes.io/projected/573bf6bf-ba45-4eaf-a331-e11c2696a1ab-kube-api-access-vqx6k\") pod \"barbican-worker-5c6cd97dcf-twztd\" (UID: \"573bf6bf-ba45-4eaf-a331-e11c2696a1ab\") " pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.704701 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573bf6bf-ba45-4eaf-a331-e11c2696a1ab-config-data\") pod \"barbican-worker-5c6cd97dcf-twztd\" (UID: \"573bf6bf-ba45-4eaf-a331-e11c2696a1ab\") " pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.723676 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a20e7837-8316-4dd1-91d5-60d2ea604213-logs\") pod \"barbican-keystone-listener-cf8d46dd8-mljgg\" (UID: \"a20e7837-8316-4dd1-91d5-60d2ea604213\") " pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.725383 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573bf6bf-ba45-4eaf-a331-e11c2696a1ab-logs\") pod \"barbican-worker-5c6cd97dcf-twztd\" (UID: \"573bf6bf-ba45-4eaf-a331-e11c2696a1ab\") " pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.737108 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20e7837-8316-4dd1-91d5-60d2ea604213-combined-ca-bundle\") pod \"barbican-keystone-listener-cf8d46dd8-mljgg\" (UID: \"a20e7837-8316-4dd1-91d5-60d2ea604213\") " pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.739052 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-njbjk"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.740429 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a20e7837-8316-4dd1-91d5-60d2ea604213-config-data-custom\") pod \"barbican-keystone-listener-cf8d46dd8-mljgg\" (UID: \"a20e7837-8316-4dd1-91d5-60d2ea604213\") " pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.754175 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20e7837-8316-4dd1-91d5-60d2ea604213-config-data\") pod \"barbican-keystone-listener-cf8d46dd8-mljgg\" (UID: \"a20e7837-8316-4dd1-91d5-60d2ea604213\") " pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.754505 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn4tg\" (UniqueName: \"kubernetes.io/projected/a20e7837-8316-4dd1-91d5-60d2ea604213-kube-api-access-pn4tg\") pod \"barbican-keystone-listener-cf8d46dd8-mljgg\" (UID: \"a20e7837-8316-4dd1-91d5-60d2ea604213\") " pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.757029 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573bf6bf-ba45-4eaf-a331-e11c2696a1ab-combined-ca-bundle\") pod \"barbican-worker-5c6cd97dcf-twztd\" (UID: \"573bf6bf-ba45-4eaf-a331-e11c2696a1ab\") " pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.758841 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573bf6bf-ba45-4eaf-a331-e11c2696a1ab-config-data\") pod \"barbican-worker-5c6cd97dcf-twztd\" (UID: \"573bf6bf-ba45-4eaf-a331-e11c2696a1ab\") " pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.769874 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-njbjk"]
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.773967 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/573bf6bf-ba45-4eaf-a331-e11c2696a1ab-config-data-custom\") pod \"barbican-worker-5c6cd97dcf-twztd\" (UID: \"573bf6bf-ba45-4eaf-a331-e11c2696a1ab\") " pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.791559 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqx6k\" (UniqueName: \"kubernetes.io/projected/573bf6bf-ba45-4eaf-a331-e11c2696a1ab-kube-api-access-vqx6k\") pod \"barbican-worker-5c6cd97dcf-twztd\" (UID: \"573bf6bf-ba45-4eaf-a331-e11c2696a1ab\") " pod="openstack/barbican-worker-5c6cd97dcf-twztd"
Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.796517 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-778db78b58-dzbj2"] Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.802667 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.806129 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.808916 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.809104 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.809194 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.809230 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fth2t\" (UniqueName: \"kubernetes.io/projected/6f5c8dd7-8982-4b05-9582-6fce29de5659-kube-api-access-fth2t\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.809334 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.809478 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-config\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.833127 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-778db78b58-dzbj2"] Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.916574 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 
14:57:32.916725 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.916768 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-config-data\") pod \"barbican-api-778db78b58-dzbj2\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.916798 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fth2t\" (UniqueName: \"kubernetes.io/projected/6f5c8dd7-8982-4b05-9582-6fce29de5659-kube-api-access-fth2t\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.916855 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-combined-ca-bundle\") pod \"barbican-api-778db78b58-dzbj2\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.916928 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjt7d\" (UniqueName: \"kubernetes.io/projected/0f601b3c-a4c8-4948-a149-784b4d88a0b4-kube-api-access-bjt7d\") pod \"barbican-api-778db78b58-dzbj2\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.916992 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.917080 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-config\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.917124 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f601b3c-a4c8-4948-a149-784b4d88a0b4-logs\") pod \"barbican-api-778db78b58-dzbj2\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.917157 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 
14:57:32.917261 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-config-data-custom\") pod \"barbican-api-778db78b58-dzbj2\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.918798 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.920155 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-config\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.935771 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.937341 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.937555 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.944052 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fth2t\" (UniqueName: \"kubernetes.io/projected/6f5c8dd7-8982-4b05-9582-6fce29de5659-kube-api-access-fth2t\") pod \"dnsmasq-dns-586bdc5f9-njbjk\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:32 crc kubenswrapper[4727]: I1210 14:57:32.994696 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5c6cd97dcf-twztd" Dec 10 14:57:33 crc kubenswrapper[4727]: I1210 14:57:33.008456 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg" Dec 10 14:57:33 crc kubenswrapper[4727]: I1210 14:57:33.019944 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-config-data\") pod \"barbican-api-778db78b58-dzbj2\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:33 crc kubenswrapper[4727]: I1210 14:57:33.020036 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-combined-ca-bundle\") pod \"barbican-api-778db78b58-dzbj2\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:33 crc kubenswrapper[4727]: I1210 14:57:33.020089 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjt7d\" (UniqueName: \"kubernetes.io/projected/0f601b3c-a4c8-4948-a149-784b4d88a0b4-kube-api-access-bjt7d\") pod \"barbican-api-778db78b58-dzbj2\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:33 crc kubenswrapper[4727]: I1210 14:57:33.020167 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f601b3c-a4c8-4948-a149-784b4d88a0b4-logs\") pod \"barbican-api-778db78b58-dzbj2\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:33 crc kubenswrapper[4727]: I1210 14:57:33.020227 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-config-data-custom\") pod \"barbican-api-778db78b58-dzbj2\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:33 crc kubenswrapper[4727]: I1210 14:57:33.022263 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f601b3c-a4c8-4948-a149-784b4d88a0b4-logs\") pod \"barbican-api-778db78b58-dzbj2\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:33 crc kubenswrapper[4727]: I1210 14:57:33.034009 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-combined-ca-bundle\") pod \"barbican-api-778db78b58-dzbj2\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:33 crc kubenswrapper[4727]: I1210 14:57:33.043994 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-config-data-custom\") pod \"barbican-api-778db78b58-dzbj2\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:33 crc kubenswrapper[4727]: I1210 14:57:33.044941 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-config-data\") pod \"barbican-api-778db78b58-dzbj2\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 
14:57:33 crc kubenswrapper[4727]: I1210 14:57:33.055702 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjt7d\" (UniqueName: \"kubernetes.io/projected/0f601b3c-a4c8-4948-a149-784b4d88a0b4-kube-api-access-bjt7d\") pod \"barbican-api-778db78b58-dzbj2\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:33 crc kubenswrapper[4727]: I1210 14:57:33.172792 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:33 crc kubenswrapper[4727]: I1210 14:57:33.192510 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:33 crc kubenswrapper[4727]: I1210 14:57:33.198309 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f9b858948-mx57t" event={"ID":"f0297dbf-60f7-48b2-b36d-3ed437da1944","Type":"ContainerStarted","Data":"f3b610eab94cb66f555979820018e149e3ed2c6b0cf74af514f18c32cbf33095"} Dec 10 14:57:34 crc kubenswrapper[4727]: I1210 14:57:34.010998 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c6cd97dcf-twztd"] Dec 10 14:57:34 crc kubenswrapper[4727]: I1210 14:57:34.046108 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-cf8d46dd8-mljgg"] Dec 10 14:57:34 crc kubenswrapper[4727]: W1210 14:57:34.076266 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda20e7837_8316_4dd1_91d5_60d2ea604213.slice/crio-5e08fa1f9e314fa6327716991480a6abe052b84cf3bdf439f5c023b900239e62 WatchSource:0}: Error finding container 5e08fa1f9e314fa6327716991480a6abe052b84cf3bdf439f5c023b900239e62: Status 404 returned error can't find the container with id 5e08fa1f9e314fa6327716991480a6abe052b84cf3bdf439f5c023b900239e62 Dec 10 14:57:34 crc kubenswrapper[4727]: I1210 14:57:34.223881 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f9b858948-mx57t" event={"ID":"f0297dbf-60f7-48b2-b36d-3ed437da1944","Type":"ContainerStarted","Data":"f986fae909cdfbfcb02aa48f8cba91db2faca00fe0b6f1b660148ffd3bfaa741"} Dec 10 14:57:34 crc kubenswrapper[4727]: I1210 14:57:34.224298 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f9b858948-mx57t" Dec 10 14:57:34 crc kubenswrapper[4727]: I1210 14:57:34.271547 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5f9b858948-mx57t" podStartSLOduration=4.271523294 podStartE2EDuration="4.271523294s" podCreationTimestamp="2025-12-10 14:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:34.252202513 +0000 UTC m=+1558.446977055" watchObservedRunningTime="2025-12-10 14:57:34.271523294 +0000 UTC m=+1558.466297836" Dec 10 14:57:34 crc kubenswrapper[4727]: I1210 14:57:34.280535 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7eab25e3-c058-4db1-b610-e5394ae0c2c1","Type":"ContainerStarted","Data":"ccc8fdbcd654ff94c8f4ffa12bb73c87d91523ea1a02dd5bbc0a7ba557e2635a"} Dec 10 14:57:34 crc kubenswrapper[4727]: I1210 14:57:34.290322 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c6cd97dcf-twztd" 
event={"ID":"573bf6bf-ba45-4eaf-a331-e11c2696a1ab","Type":"ContainerStarted","Data":"8082d87bb0cb829390dd326d1916981f6c458e0ca7d446cdf1c8cb6b4c88aec6"} Dec 10 14:57:34 crc kubenswrapper[4727]: I1210 14:57:34.293200 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg" event={"ID":"a20e7837-8316-4dd1-91d5-60d2ea604213","Type":"ContainerStarted","Data":"5e08fa1f9e314fa6327716991480a6abe052b84cf3bdf439f5c023b900239e62"} Dec 10 14:57:34 crc kubenswrapper[4727]: I1210 14:57:34.294684 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c526c7c-ad39-4172-a998-a6935f2522e2","Type":"ContainerStarted","Data":"441d56117116acd6aaff1cf86c763a2812fe8795fbe5b9ceebbe9962cc222af4"} Dec 10 14:57:34 crc kubenswrapper[4727]: I1210 14:57:34.306561 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-778db78b58-dzbj2"] Dec 10 14:57:34 crc kubenswrapper[4727]: W1210 14:57:34.309556 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f601b3c_a4c8_4948_a149_784b4d88a0b4.slice/crio-cdb0cebdd95e7059e43ef6da7531666d9c08f41efc7b7f83ae31b78f00253f39 WatchSource:0}: Error finding container cdb0cebdd95e7059e43ef6da7531666d9c08f41efc7b7f83ae31b78f00253f39: Status 404 returned error can't find the container with id cdb0cebdd95e7059e43ef6da7531666d9c08f41efc7b7f83ae31b78f00253f39 Dec 10 14:57:34 crc kubenswrapper[4727]: I1210 14:57:34.339572 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-njbjk"] Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.317770 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7eab25e3-c058-4db1-b610-e5394ae0c2c1","Type":"ContainerStarted","Data":"3d1787371df6c76ae727a45e8e2d7bb6492c4f12f5af80f95f0b2b49574447b6"} Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.334688 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778db78b58-dzbj2" event={"ID":"0f601b3c-a4c8-4948-a149-784b4d88a0b4","Type":"ContainerStarted","Data":"9a5b92a092e819457258571ab100a96e2c2517ff7ab2a193786ca67a8faa5732"} Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.334746 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778db78b58-dzbj2" event={"ID":"0f601b3c-a4c8-4948-a149-784b4d88a0b4","Type":"ContainerStarted","Data":"f0c8141debefa58e1c6d3593250cb651937fad495bc54c7e0a96754370ba8bb2"} Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.334758 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778db78b58-dzbj2" event={"ID":"0f601b3c-a4c8-4948-a149-784b4d88a0b4","Type":"ContainerStarted","Data":"cdb0cebdd95e7059e43ef6da7531666d9c08f41efc7b7f83ae31b78f00253f39"} Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.335142 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.335269 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.343140 4727 generic.go:334] "Generic (PLEG): container finished" podID="6f5c8dd7-8982-4b05-9582-6fce29de5659" containerID="f8fe4d9c1a8bbbf6228060783317dc49068d03b23520b971f0a888488ba37660" exitCode=0 Dec 10 
14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.343232 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" event={"ID":"6f5c8dd7-8982-4b05-9582-6fce29de5659","Type":"ContainerDied","Data":"f8fe4d9c1a8bbbf6228060783317dc49068d03b23520b971f0a888488ba37660"} Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.343267 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" event={"ID":"6f5c8dd7-8982-4b05-9582-6fce29de5659","Type":"ContainerStarted","Data":"df82ed8a2c185aaf6c8fc29aeb28d917f659e5d352b53d1808a6230ea51c1032"} Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.378325 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c526c7c-ad39-4172-a998-a6935f2522e2","Type":"ContainerStarted","Data":"b12e9660bf8585f219d40f7f23ec3334f7caba717c11c222a07e84a444bfeeff"} Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.378576 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f9b858948-mx57t" Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.371684 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.37165374 podStartE2EDuration="5.37165374s" podCreationTimestamp="2025-12-10 14:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:35.350384953 +0000 UTC m=+1559.545159495" watchObservedRunningTime="2025-12-10 14:57:35.37165374 +0000 UTC m=+1559.566428282" Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.393121 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-778db78b58-dzbj2" podStartSLOduration=3.393092271 podStartE2EDuration="3.393092271s" podCreationTimestamp="2025-12-10 14:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:35.378929141 +0000 UTC m=+1559.573703683" watchObservedRunningTime="2025-12-10 14:57:35.393092271 +0000 UTC m=+1559.587866813" Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.499689 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.499664951 podStartE2EDuration="5.499664951s" podCreationTimestamp="2025-12-10 14:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:35.45824627 +0000 UTC m=+1559.653020822" watchObservedRunningTime="2025-12-10 14:57:35.499664951 +0000 UTC m=+1559.694439493" Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.942601 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5f6784f8dd-pcqm7"] Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.945725 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.950446 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.950787 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 10 14:57:35 crc kubenswrapper[4727]: I1210 14:57:35.953514 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f6784f8dd-pcqm7"] Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.121140 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7thg\" (UniqueName: \"kubernetes.io/projected/adb9291c-b698-4aa7-b4c9-579f80d0183b-kube-api-access-p7thg\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.121199 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adb9291c-b698-4aa7-b4c9-579f80d0183b-config-data-custom\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.121391 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb9291c-b698-4aa7-b4c9-579f80d0183b-config-data\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.121671 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb9291c-b698-4aa7-b4c9-579f80d0183b-internal-tls-certs\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.121786 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb9291c-b698-4aa7-b4c9-579f80d0183b-public-tls-certs\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.122087 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb9291c-b698-4aa7-b4c9-579f80d0183b-combined-ca-bundle\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.122165 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adb9291c-b698-4aa7-b4c9-579f80d0183b-logs\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.224773 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb9291c-b698-4aa7-b4c9-579f80d0183b-internal-tls-certs\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.224853 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb9291c-b698-4aa7-b4c9-579f80d0183b-public-tls-certs\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.224981 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb9291c-b698-4aa7-b4c9-579f80d0183b-combined-ca-bundle\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.225027 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adb9291c-b698-4aa7-b4c9-579f80d0183b-logs\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.225118 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7thg\" (UniqueName: \"kubernetes.io/projected/adb9291c-b698-4aa7-b4c9-579f80d0183b-kube-api-access-p7thg\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.225146 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adb9291c-b698-4aa7-b4c9-579f80d0183b-config-data-custom\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.225183 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb9291c-b698-4aa7-b4c9-579f80d0183b-config-data\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.227344 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adb9291c-b698-4aa7-b4c9-579f80d0183b-logs\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.232405 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb9291c-b698-4aa7-b4c9-579f80d0183b-public-tls-certs\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.233654 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/adb9291c-b698-4aa7-b4c9-579f80d0183b-internal-tls-certs\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.236430 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb9291c-b698-4aa7-b4c9-579f80d0183b-config-data\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.243046 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adb9291c-b698-4aa7-b4c9-579f80d0183b-config-data-custom\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.245697 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb9291c-b698-4aa7-b4c9-579f80d0183b-combined-ca-bundle\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.252549 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7thg\" (UniqueName: \"kubernetes.io/projected/adb9291c-b698-4aa7-b4c9-579f80d0183b-kube-api-access-p7thg\") pod \"barbican-api-5f6784f8dd-pcqm7\" (UID: \"adb9291c-b698-4aa7-b4c9-579f80d0183b\") " pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.268442 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.391356 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" event={"ID":"6f5c8dd7-8982-4b05-9582-6fce29de5659","Type":"ContainerStarted","Data":"1dbf83de38df47538b7f0728c06d69d5ec139d286b30cfb94efd537436a1f84d"} Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.392720 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.420639 4727 generic.go:334] "Generic (PLEG): container finished" podID="6a7dc74a-65ab-4440-b4d0-33c102b7baeb" containerID="117c24f1629e191a6ad2f9a8448ced1e9a50d8ac8f7b8097da25da5915028c21" exitCode=0 Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.421752 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8s8pt" event={"ID":"6a7dc74a-65ab-4440-b4d0-33c102b7baeb","Type":"ContainerDied","Data":"117c24f1629e191a6ad2f9a8448ced1e9a50d8ac8f7b8097da25da5915028c21"} Dec 10 14:57:36 crc kubenswrapper[4727]: I1210 14:57:36.431554 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" podStartSLOduration=4.43152788 podStartE2EDuration="4.43152788s" podCreationTimestamp="2025-12-10 14:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:36.420634334 +0000 UTC m=+1560.615408886" watchObservedRunningTime="2025-12-10 14:57:36.43152788 +0000 UTC m=+1560.626302432" Dec 10 14:57:37 crc kubenswrapper[4727]: I1210 14:57:37.435576 4727 generic.go:334] "Generic (PLEG): container finished" podID="672a3a2e-19cb-4512-a908-c8d6f16753f7" containerID="8678a2d1d4114f78beffb7d2645739e471a7dcdab2fa6f00021e307baeffa308" exitCode=0 Dec 10 14:57:37 crc kubenswrapper[4727]: I1210 14:57:37.435745 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-l9hd6" event={"ID":"672a3a2e-19cb-4512-a908-c8d6f16753f7","Type":"ContainerDied","Data":"8678a2d1d4114f78beffb7d2645739e471a7dcdab2fa6f00021e307baeffa308"} Dec 10 14:57:41 crc kubenswrapper[4727]: I1210 14:57:41.105282 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 14:57:41 crc kubenswrapper[4727]: I1210 14:57:41.105750 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 14:57:41 crc kubenswrapper[4727]: I1210 14:57:41.128209 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 14:57:41 crc kubenswrapper[4727]: I1210 14:57:41.128301 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 14:57:41 crc kubenswrapper[4727]: I1210 14:57:41.212231 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 10 14:57:41 crc kubenswrapper[4727]: I1210 14:57:41.212293 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 10 14:57:41 crc kubenswrapper[4727]: I1210 14:57:41.213115 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 14:57:41 
crc kubenswrapper[4727]: I1210 14:57:41.213355 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 14:57:41 crc kubenswrapper[4727]: I1210 14:57:41.484461 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 14:57:41 crc kubenswrapper[4727]: I1210 14:57:41.485188 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 14:57:41 crc kubenswrapper[4727]: I1210 14:57:41.485286 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 10 14:57:41 crc kubenswrapper[4727]: I1210 14:57:41.485367 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 10 14:57:43 crc kubenswrapper[4727]: I1210 14:57:43.175270 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:57:43 crc kubenswrapper[4727]: I1210 14:57:43.235027 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s8th4"] Dec 10 14:57:43 crc kubenswrapper[4727]: I1210 14:57:43.235289 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" podUID="d0c47fd2-9e71-493d-a5bd-d528703d9942" containerName="dnsmasq-dns" containerID="cri-o://f5a5d28917582a724ef719c36ecd0b07cd0dc9ffc9b0f6e54bf0f1f2aaff3c8e" gracePeriod=10 Dec 10 14:57:43 crc kubenswrapper[4727]: I1210 14:57:43.504413 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:57:43 crc kubenswrapper[4727]: I1210 14:57:43.504446 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:57:43 crc kubenswrapper[4727]: I1210 14:57:43.504383 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:57:43 crc kubenswrapper[4727]: I1210 14:57:43.504563 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:57:44 crc kubenswrapper[4727]: I1210 14:57:44.236157 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-778db78b58-dzbj2" podUID="0f601b3c-a4c8-4948-a149-784b4d88a0b4" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:57:44 crc kubenswrapper[4727]: I1210 14:57:44.961891 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:45 crc kubenswrapper[4727]: I1210 14:57:45.528360 4727 generic.go:334] "Generic (PLEG): container finished" podID="d0c47fd2-9e71-493d-a5bd-d528703d9942" containerID="f5a5d28917582a724ef719c36ecd0b07cd0dc9ffc9b0f6e54bf0f1f2aaff3c8e" exitCode=0 Dec 10 14:57:45 crc kubenswrapper[4727]: I1210 14:57:45.528420 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" event={"ID":"d0c47fd2-9e71-493d-a5bd-d528703d9942","Type":"ContainerDied","Data":"f5a5d28917582a724ef719c36ecd0b07cd0dc9ffc9b0f6e54bf0f1f2aaff3c8e"} Dec 10 14:57:45 crc kubenswrapper[4727]: I1210 14:57:45.834193 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:57:47 crc kubenswrapper[4727]: I1210 14:57:47.815259 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8s8pt" event={"ID":"6a7dc74a-65ab-4440-b4d0-33c102b7baeb","Type":"ContainerDied","Data":"8754daf97a17deac9ac170b6c2924b6a4090218879e52d936f2bff618459f8f9"} Dec 10 14:57:47 crc kubenswrapper[4727]: I1210 14:57:47.815607 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8754daf97a17deac9ac170b6c2924b6a4090218879e52d936f2bff618459f8f9" Dec 10 14:57:47 crc kubenswrapper[4727]: I1210 14:57:47.819314 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-l9hd6" event={"ID":"672a3a2e-19cb-4512-a908-c8d6f16753f7","Type":"ContainerDied","Data":"e8e86c4527984486f107dea205078b1287cd2f69d04afe0746a0c6eb1efab590"} Dec 10 14:57:47 crc kubenswrapper[4727]: I1210 14:57:47.819355 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8e86c4527984486f107dea205078b1287cd2f69d04afe0746a0c6eb1efab590" Dec 10 14:57:47 crc kubenswrapper[4727]: I1210 14:57:47.902373 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-l9hd6" Dec 10 14:57:47 crc kubenswrapper[4727]: I1210 14:57:47.903854 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.008667 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65nbx\" (UniqueName: \"kubernetes.io/projected/672a3a2e-19cb-4512-a908-c8d6f16753f7-kube-api-access-65nbx\") pod \"672a3a2e-19cb-4512-a908-c8d6f16753f7\" (UID: \"672a3a2e-19cb-4512-a908-c8d6f16753f7\") " Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.008794 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-etc-machine-id\") pod \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.008838 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/672a3a2e-19cb-4512-a908-c8d6f16753f7-config\") pod \"672a3a2e-19cb-4512-a908-c8d6f16753f7\" (UID: \"672a3a2e-19cb-4512-a908-c8d6f16753f7\") " Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.008868 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6a7dc74a-65ab-4440-b4d0-33c102b7baeb" (UID: "6a7dc74a-65ab-4440-b4d0-33c102b7baeb"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.008948 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672a3a2e-19cb-4512-a908-c8d6f16753f7-combined-ca-bundle\") pod \"672a3a2e-19cb-4512-a908-c8d6f16753f7\" (UID: \"672a3a2e-19cb-4512-a908-c8d6f16753f7\") " Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.009020 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-combined-ca-bundle\") pod \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.009063 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-db-sync-config-data\") pod \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.009141 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrxxz\" (UniqueName: \"kubernetes.io/projected/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-kube-api-access-jrxxz\") pod \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.009173 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-scripts\") pod \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.010025 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-config-data\") pod \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\" (UID: \"6a7dc74a-65ab-4440-b4d0-33c102b7baeb\") " Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.010546 4727 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.016550 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6a7dc74a-65ab-4440-b4d0-33c102b7baeb" (UID: "6a7dc74a-65ab-4440-b4d0-33c102b7baeb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.017395 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-scripts" (OuterVolumeSpecName: "scripts") pod "6a7dc74a-65ab-4440-b4d0-33c102b7baeb" (UID: "6a7dc74a-65ab-4440-b4d0-33c102b7baeb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.018867 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/672a3a2e-19cb-4512-a908-c8d6f16753f7-kube-api-access-65nbx" (OuterVolumeSpecName: "kube-api-access-65nbx") pod "672a3a2e-19cb-4512-a908-c8d6f16753f7" (UID: "672a3a2e-19cb-4512-a908-c8d6f16753f7"). InnerVolumeSpecName "kube-api-access-65nbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.020350 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-kube-api-access-jrxxz" (OuterVolumeSpecName: "kube-api-access-jrxxz") pod "6a7dc74a-65ab-4440-b4d0-33c102b7baeb" (UID: "6a7dc74a-65ab-4440-b4d0-33c102b7baeb"). InnerVolumeSpecName "kube-api-access-jrxxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.046809 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/672a3a2e-19cb-4512-a908-c8d6f16753f7-config" (OuterVolumeSpecName: "config") pod "672a3a2e-19cb-4512-a908-c8d6f16753f7" (UID: "672a3a2e-19cb-4512-a908-c8d6f16753f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.062457 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/672a3a2e-19cb-4512-a908-c8d6f16753f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "672a3a2e-19cb-4512-a908-c8d6f16753f7" (UID: "672a3a2e-19cb-4512-a908-c8d6f16753f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.067857 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a7dc74a-65ab-4440-b4d0-33c102b7baeb" (UID: "6a7dc74a-65ab-4440-b4d0-33c102b7baeb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.111957 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/672a3a2e-19cb-4512-a908-c8d6f16753f7-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.111997 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672a3a2e-19cb-4512-a908-c8d6f16753f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.112010 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.112019 4727 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.112028 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrxxz\" (UniqueName: \"kubernetes.io/projected/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-kube-api-access-jrxxz\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.112037 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.112046 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65nbx\" (UniqueName: \"kubernetes.io/projected/672a3a2e-19cb-4512-a908-c8d6f16753f7-kube-api-access-65nbx\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.185192 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-config-data" (OuterVolumeSpecName: "config-data") pod "6a7dc74a-65ab-4440-b4d0-33c102b7baeb" (UID: "6a7dc74a-65ab-4440-b4d0-33c102b7baeb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.206265 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.206440 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.214518 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7dc74a-65ab-4440-b4d0-33c102b7baeb-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.227498 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.227627 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.242621 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.397243 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.780259 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f6784f8dd-pcqm7"] Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.893892 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8s8pt" Dec 10 14:57:48 crc kubenswrapper[4727]: I1210 14:57:48.894203 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-l9hd6" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.228473 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wb96v"] Dec 10 14:57:49 crc kubenswrapper[4727]: E1210 14:57:49.230091 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7dc74a-65ab-4440-b4d0-33c102b7baeb" containerName="cinder-db-sync" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.230116 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7dc74a-65ab-4440-b4d0-33c102b7baeb" containerName="cinder-db-sync" Dec 10 14:57:49 crc kubenswrapper[4727]: E1210 14:57:49.230138 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="672a3a2e-19cb-4512-a908-c8d6f16753f7" containerName="neutron-db-sync" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.230151 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="672a3a2e-19cb-4512-a908-c8d6f16753f7" containerName="neutron-db-sync" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.230357 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="672a3a2e-19cb-4512-a908-c8d6f16753f7" containerName="neutron-db-sync" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.230376 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7dc74a-65ab-4440-b4d0-33c102b7baeb" containerName="cinder-db-sync" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.231654 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.269652 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wb96v"] Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.308003 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.312519 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.317264 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.317699 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.318108 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.320784 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vlk2f" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.391143 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-dns-svc\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.391206 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b37eebb5-17ce-4765-a7d5-022abb215816-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.391246 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb27p\" (UniqueName: \"kubernetes.io/projected/b37eebb5-17ce-4765-a7d5-022abb215816-kube-api-access-wb27p\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.391289 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-config\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.391314 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.391396 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-config-data\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") 
" pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.391436 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.391461 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-scripts\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.391488 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29n28\" (UniqueName: \"kubernetes.io/projected/3b348d71-4bc5-4820-8754-6816258a3eef-kube-api-access-29n28\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.391566 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.391591 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.391615 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.436806 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.493513 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-config-data\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.493603 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.493627 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-scripts\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.493652 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29n28\" (UniqueName: \"kubernetes.io/projected/3b348d71-4bc5-4820-8754-6816258a3eef-kube-api-access-29n28\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.493718 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.493736 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.493756 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.493915 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-dns-svc\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.494008 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b37eebb5-17ce-4765-a7d5-022abb215816-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.494054 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb27p\" (UniqueName: \"kubernetes.io/projected/b37eebb5-17ce-4765-a7d5-022abb215816-kube-api-access-wb27p\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.494124 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-config\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.494150 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: 
\"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.495407 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.496183 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-dns-svc\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.496250 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b37eebb5-17ce-4765-a7d5-022abb215816-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.496695 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.500687 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.501141 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-config\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.507470 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-scripts\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.516793 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.519383 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.524320 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-config-data\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.535803 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29n28\" (UniqueName: \"kubernetes.io/projected/3b348d71-4bc5-4820-8754-6816258a3eef-kube-api-access-29n28\") pod \"dnsmasq-dns-85ff748b95-wb96v\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.546484 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb27p\" (UniqueName: \"kubernetes.io/projected/b37eebb5-17ce-4765-a7d5-022abb215816-kube-api-access-wb27p\") pod \"cinder-scheduler-0\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.585243 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.611849 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7f9b8bf876-ww2gz"] Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.620559 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.635065 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.635302 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.635444 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sf5vj" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.635632 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.649560 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wb96v"] Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.690574 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.693263 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f9b8bf876-ww2gz"] Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.701459 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-ovndb-tls-certs\") pod \"neutron-7f9b8bf876-ww2gz\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.701572 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-httpd-config\") pod \"neutron-7f9b8bf876-ww2gz\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.701602 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh9kk\" (UniqueName: \"kubernetes.io/projected/0acfba51-9dd2-48cb-b22d-7a59dff45f74-kube-api-access-jh9kk\") pod \"neutron-7f9b8bf876-ww2gz\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.701746 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-config\") pod \"neutron-7f9b8bf876-ww2gz\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.701839 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-combined-ca-bundle\") pod \"neutron-7f9b8bf876-ww2gz\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.729559 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lr6qp"] Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.732019 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.746825 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lr6qp"] Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.801581 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.803370 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.803615 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hqsm\" (UniqueName: \"kubernetes.io/projected/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-kube-api-access-2hqsm\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.803672 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-combined-ca-bundle\") pod \"neutron-7f9b8bf876-ww2gz\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.803699 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.803740 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-config\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.803813 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-ovndb-tls-certs\") pod \"neutron-7f9b8bf876-ww2gz\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.803885 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.805575 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.805613 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-httpd-config\") pod \"neutron-7f9b8bf876-ww2gz\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.806456 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh9kk\" (UniqueName: \"kubernetes.io/projected/0acfba51-9dd2-48cb-b22d-7a59dff45f74-kube-api-access-jh9kk\") pod \"neutron-7f9b8bf876-ww2gz\" (UID: 
\"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.806585 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.806664 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-config\") pod \"neutron-7f9b8bf876-ww2gz\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.809546 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-combined-ca-bundle\") pod \"neutron-7f9b8bf876-ww2gz\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.812420 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.812875 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-httpd-config\") pod \"neutron-7f9b8bf876-ww2gz\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.826295 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-config\") pod \"neutron-7f9b8bf876-ww2gz\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.841924 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh9kk\" (UniqueName: \"kubernetes.io/projected/0acfba51-9dd2-48cb-b22d-7a59dff45f74-kube-api-access-jh9kk\") pod \"neutron-7f9b8bf876-ww2gz\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.842549 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-ovndb-tls-certs\") pod \"neutron-7f9b8bf876-ww2gz\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.844977 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.909164 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-scripts\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.909226 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-config-data\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.909253 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fdc6346-d5eb-4696-ac1a-a05f59542183-logs\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.909308 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.909348 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.909420 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.909456 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn2h5\" (UniqueName: \"kubernetes.io/projected/6fdc6346-d5eb-4696-ac1a-a05f59542183-kube-api-access-nn2h5\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.910503 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.910702 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.910825 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hqsm\" (UniqueName: \"kubernetes.io/projected/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-kube-api-access-2hqsm\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.911004 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" 
(UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.911156 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.911166 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.911999 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.915251 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-config\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.915371 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fdc6346-d5eb-4696-ac1a-a05f59542183-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.915494 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-config-data-custom\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.916319 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-config\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.934224 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hqsm\" (UniqueName: \"kubernetes.io/projected/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-kube-api-access-2hqsm\") pod \"dnsmasq-dns-5c9776ccc5-lr6qp\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:49 crc kubenswrapper[4727]: I1210 14:57:49.959726 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:49 crc kubenswrapper[4727]: W1210 14:57:49.963875 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadb9291c_b698_4aa7_b4c9_579f80d0183b.slice/crio-fb05be4b9414b8b70324622d3dbd1cc484f45422c2e8195af8f77c703407955f WatchSource:0}: Error finding container fb05be4b9414b8b70324622d3dbd1cc484f45422c2e8195af8f77c703407955f: Status 404 returned error can't find the container with id fb05be4b9414b8b70324622d3dbd1cc484f45422c2e8195af8f77c703407955f Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.018256 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fdc6346-d5eb-4696-ac1a-a05f59542183-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.018358 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-config-data-custom\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.018407 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fdc6346-d5eb-4696-ac1a-a05f59542183-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.018489 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-scripts\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.018538 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-config-data\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.018567 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fdc6346-d5eb-4696-ac1a-a05f59542183-logs\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.018658 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn2h5\" (UniqueName: \"kubernetes.io/projected/6fdc6346-d5eb-4696-ac1a-a05f59542183-kube-api-access-nn2h5\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.018779 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.019159 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fdc6346-d5eb-4696-ac1a-a05f59542183-logs\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.022689 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-config-data-custom\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.023584 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-config-data\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.025175 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.038471 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-scripts\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.046214 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn2h5\" (UniqueName: \"kubernetes.io/projected/6fdc6346-d5eb-4696-ac1a-a05f59542183-kube-api-access-nn2h5\") pod \"cinder-api-0\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " pod="openstack/cinder-api-0" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.062216 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.164067 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.174859 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.346737 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-dns-swift-storage-0\") pod \"d0c47fd2-9e71-493d-a5bd-d528703d9942\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.347083 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-ovsdbserver-nb\") pod \"d0c47fd2-9e71-493d-a5bd-d528703d9942\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.347145 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-config\") pod \"d0c47fd2-9e71-493d-a5bd-d528703d9942\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.347200 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pcdm\" (UniqueName: \"kubernetes.io/projected/d0c47fd2-9e71-493d-a5bd-d528703d9942-kube-api-access-6pcdm\") pod \"d0c47fd2-9e71-493d-a5bd-d528703d9942\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.347228 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-dns-svc\") pod \"d0c47fd2-9e71-493d-a5bd-d528703d9942\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.347310 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-ovsdbserver-sb\") pod \"d0c47fd2-9e71-493d-a5bd-d528703d9942\" (UID: \"d0c47fd2-9e71-493d-a5bd-d528703d9942\") " Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.366659 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0c47fd2-9e71-493d-a5bd-d528703d9942-kube-api-access-6pcdm" (OuterVolumeSpecName: "kube-api-access-6pcdm") pod "d0c47fd2-9e71-493d-a5bd-d528703d9942" (UID: "d0c47fd2-9e71-493d-a5bd-d528703d9942"). InnerVolumeSpecName "kube-api-access-6pcdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.451594 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pcdm\" (UniqueName: \"kubernetes.io/projected/d0c47fd2-9e71-493d-a5bd-d528703d9942-kube-api-access-6pcdm\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.631774 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d0c47fd2-9e71-493d-a5bd-d528703d9942" (UID: "d0c47fd2-9e71-493d-a5bd-d528703d9942"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.637236 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d0c47fd2-9e71-493d-a5bd-d528703d9942" (UID: "d0c47fd2-9e71-493d-a5bd-d528703d9942"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.639158 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-config" (OuterVolumeSpecName: "config") pod "d0c47fd2-9e71-493d-a5bd-d528703d9942" (UID: "d0c47fd2-9e71-493d-a5bd-d528703d9942"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.641267 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d0c47fd2-9e71-493d-a5bd-d528703d9942" (UID: "d0c47fd2-9e71-493d-a5bd-d528703d9942"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.656257 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.656286 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.656296 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.656305 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.664215 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d0c47fd2-9e71-493d-a5bd-d528703d9942" (UID: "d0c47fd2-9e71-493d-a5bd-d528703d9942"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.757916 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0c47fd2-9e71-493d-a5bd-d528703d9942-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.864437 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wb96v"] Dec 10 14:57:50 crc kubenswrapper[4727]: W1210 14:57:50.903400 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb37eebb5_17ce_4765_a7d5_022abb215816.slice/crio-f7f35d46d358f40384329d3c3ba534736bb4c7d5a02cf4f8596fe54badc8c0d0 WatchSource:0}: Error finding container f7f35d46d358f40384329d3c3ba534736bb4c7d5a02cf4f8596fe54badc8c0d0: Status 404 returned error can't find the container with id f7f35d46d358f40384329d3c3ba534736bb4c7d5a02cf4f8596fe54badc8c0d0 Dec 10 14:57:50 crc kubenswrapper[4727]: I1210 14:57:50.950787 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 14:57:50 crc kubenswrapper[4727]: E1210 14:57:50.999943 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Dec 10 14:57:51 crc kubenswrapper[4727]: E1210 14:57:51.000531 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fbht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(353e8adf-9611-450e-ac07-270a2550ee0e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:57:51 crc kubenswrapper[4727]: E1210 14:57:51.003555 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="353e8adf-9611-450e-ac07-270a2550ee0e" Dec 10 14:57:51 crc kubenswrapper[4727]: I1210 14:57:51.031021 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-wb96v" event={"ID":"3b348d71-4bc5-4820-8754-6816258a3eef","Type":"ContainerStarted","Data":"4e0dd8f9e57e40aafe1ee19b112051127a18d563f13013e90e815270ee663f38"} Dec 10 14:57:51 crc kubenswrapper[4727]: I1210 14:57:51.056303 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b37eebb5-17ce-4765-a7d5-022abb215816","Type":"ContainerStarted","Data":"f7f35d46d358f40384329d3c3ba534736bb4c7d5a02cf4f8596fe54badc8c0d0"} Dec 10 14:57:51 crc kubenswrapper[4727]: I1210 14:57:51.061318 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c6cd97dcf-twztd" event={"ID":"573bf6bf-ba45-4eaf-a331-e11c2696a1ab","Type":"ContainerStarted","Data":"1ada29ba3198fe5cc61d94f5762fbff453176a6c1fb834821ee75acb6be54415"} Dec 10 14:57:51 crc kubenswrapper[4727]: I1210 14:57:51.086223 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f6784f8dd-pcqm7" event={"ID":"adb9291c-b698-4aa7-b4c9-579f80d0183b","Type":"ContainerStarted","Data":"8a8dec571e62f63f28f42df35c7629ffdf80e9f67497a7292c8dff3f5c7a0ff8"} Dec 10 14:57:51 crc kubenswrapper[4727]: I1210 14:57:51.086269 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f6784f8dd-pcqm7" event={"ID":"adb9291c-b698-4aa7-b4c9-579f80d0183b","Type":"ContainerStarted","Data":"fb05be4b9414b8b70324622d3dbd1cc484f45422c2e8195af8f77c703407955f"} Dec 10 14:57:51 crc kubenswrapper[4727]: I1210 14:57:51.096875 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg" event={"ID":"a20e7837-8316-4dd1-91d5-60d2ea604213","Type":"ContainerStarted","Data":"6e357bd2b996e340bb9bb990e6335d37d48bef701a6521cc5a4130c76c1735b6"} Dec 10 14:57:51 crc kubenswrapper[4727]: I1210 14:57:51.102875 4727 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" event={"ID":"d0c47fd2-9e71-493d-a5bd-d528703d9942","Type":"ContainerDied","Data":"4dd245271025db29bcfb22dc5939e83c5718afad57f84a263100dd60504737f6"}
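The `PullImage from image service failed` / `Unhandled Error` pair above buries the two actionable details, the image name and the gRPC error, inside a full dump of the ceilometer-0 container spec. Tallying pull failures per image keeps a repeating ErrImagePull visible without re-reading the spec dump. A minimal sketch, same journal-export assumption:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Illustrative sketch, not part of the log: tallies image pull failures per
// image and keeps the last error string, so a repeating ErrImagePull stands
// out at a glance. "kubelet.log" is an assumed journal export.
func main() {
	pullErr := regexp.MustCompile(`"PullImage from image service failed" err="([^"]+)" image="([^"]+)"`)

	f, err := os.Open("kubelet.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	counts := map[string]int{}
	lastErr := map[string]string{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := pullErr.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[2]]++
			lastErr[m[2]] = m[1]
		}
	}
	for img, n := range counts {
		fmt.Printf("%dx %s\n   last error: %s\n", n, img, lastErr[img])
	}
}
```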
event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" event={"ID":"d0c47fd2-9e71-493d-a5bd-d528703d9942","Type":"ContainerDied","Data":"4dd245271025db29bcfb22dc5939e83c5718afad57f84a263100dd60504737f6"} Dec 10 14:57:51 crc kubenswrapper[4727]: I1210 14:57:51.103105 4727 scope.go:117] "RemoveContainer" containerID="f5a5d28917582a724ef719c36ecd0b07cd0dc9ffc9b0f6e54bf0f1f2aaff3c8e" Dec 10 14:57:51 crc kubenswrapper[4727]: I1210 14:57:51.103157 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" Dec 10 14:57:51 crc kubenswrapper[4727]: I1210 14:57:51.229772 4727 scope.go:117] "RemoveContainer" containerID="355ad17dc465b57c7886da3df982e772cfaa07bb80be416de606b771235b93a9" Dec 10 14:57:51 crc kubenswrapper[4727]: I1210 14:57:51.238096 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s8th4"] Dec 10 14:57:51 crc kubenswrapper[4727]: I1210 14:57:51.334967 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s8th4"] Dec 10 14:57:51 crc kubenswrapper[4727]: I1210 14:57:51.353234 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lr6qp"] Dec 10 14:57:51 crc kubenswrapper[4727]: I1210 14:57:51.532419 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f9b8bf876-ww2gz"] Dec 10 14:57:51 crc kubenswrapper[4727]: I1210 14:57:51.565783 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 10 14:57:51 crc kubenswrapper[4727]: W1210 14:57:51.604051 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fdc6346_d5eb_4696_ac1a_a05f59542183.slice/crio-5603592da6941440d2ca283fa32f9878d1ac2ac3dff28a35046ffed0243e2405 WatchSource:0}: Error finding container 5603592da6941440d2ca283fa32f9878d1ac2ac3dff28a35046ffed0243e2405: Status 404 returned error can't find the container with id 5603592da6941440d2ca283fa32f9878d1ac2ac3dff28a35046ffed0243e2405 Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.127447 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.144116 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6fdc6346-d5eb-4696-ac1a-a05f59542183","Type":"ContainerStarted","Data":"5603592da6941440d2ca283fa32f9878d1ac2ac3dff28a35046ffed0243e2405"} Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.146819 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f6784f8dd-pcqm7" event={"ID":"adb9291c-b698-4aa7-b4c9-579f80d0183b","Type":"ContainerStarted","Data":"ec9c3aaaae5eeb1d7efe9edb683ec018e6fcbe28bf3662d2693c4c5daac5c0e5"} Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.148400 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.148423 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.150882 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg" event={"ID":"a20e7837-8316-4dd1-91d5-60d2ea604213","Type":"ContainerStarted","Data":"4cefa58effead56660d3ba79ac6f59a6d1a87720634c46df6ad4fd7f451f2d23"} Dec 10 
14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.163860 4727 generic.go:334] "Generic (PLEG): container finished" podID="3b348d71-4bc5-4820-8754-6816258a3eef" containerID="7fc11b80b3ef98b1913599ad2487bd6a2da4452faf385b9d8a237c5e74631efe" exitCode=0 Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.163946 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-wb96v" event={"ID":"3b348d71-4bc5-4820-8754-6816258a3eef","Type":"ContainerDied","Data":"7fc11b80b3ef98b1913599ad2487bd6a2da4452faf385b9d8a237c5e74631efe"} Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.187021 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5f6784f8dd-pcqm7" podStartSLOduration=17.186970419 podStartE2EDuration="17.186970419s" podCreationTimestamp="2025-12-10 14:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:52.17251374 +0000 UTC m=+1576.367288292" watchObservedRunningTime="2025-12-10 14:57:52.186970419 +0000 UTC m=+1576.381744971" Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.198512 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" event={"ID":"5fcd0f0a-6bef-42d5-a6be-7de638221c2d","Type":"ContainerStarted","Data":"c48bbe9306787ff6d1c66afdab3d41634499974ffd7ada9f8ba57b35bb5c2678"} Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.198559 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" event={"ID":"5fcd0f0a-6bef-42d5-a6be-7de638221c2d","Type":"ContainerStarted","Data":"cec6c4c781e5a34774f96fda57ae0c40e22baa63cb2ea296903311a7fd16b7fa"} Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.236329 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f9b8bf876-ww2gz" event={"ID":"0acfba51-9dd2-48cb-b22d-7a59dff45f74","Type":"ContainerStarted","Data":"34f77fd14ac636126e024f9473bcfad476b9aae83861361de628e93293af1bd1"} Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.245860 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-cf8d46dd8-mljgg" podStartSLOduration=6.581534608 podStartE2EDuration="20.245834766s" podCreationTimestamp="2025-12-10 14:57:32 +0000 UTC" firstStartedPulling="2025-12-10 14:57:34.097764576 +0000 UTC m=+1558.292539118" lastFinishedPulling="2025-12-10 14:57:47.762064734 +0000 UTC m=+1571.956839276" observedRunningTime="2025-12-10 14:57:52.232476558 +0000 UTC m=+1576.427251110" watchObservedRunningTime="2025-12-10 14:57:52.245834766 +0000 UTC m=+1576.440609308" Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.271746 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="353e8adf-9611-450e-ac07-270a2550ee0e" containerName="ceilometer-notification-agent" containerID="cri-o://82de7cebbfb99eee77179edc0d130a9a3d31f3a941f3bac457a87d1ae0f6795d" gracePeriod=30 Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.273171 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c6cd97dcf-twztd" event={"ID":"573bf6bf-ba45-4eaf-a331-e11c2696a1ab","Type":"ContainerStarted","Data":"065c2a85489e2992bcba12daa0cdb72241c17ba8759c6a1f764be2de3033013e"} Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.273604 4727 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="353e8adf-9611-450e-ac07-270a2550ee0e" containerName="sg-core" containerID="cri-o://e6b8adc746b2fe39675d6e3eef97a980b97353ea2505b234937ee868e4901699" gracePeriod=30 Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.395277 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5c6cd97dcf-twztd" podStartSLOduration=4.510078979 podStartE2EDuration="20.395163185s" podCreationTimestamp="2025-12-10 14:57:32 +0000 UTC" firstStartedPulling="2025-12-10 14:57:34.062417331 +0000 UTC m=+1558.257191873" lastFinishedPulling="2025-12-10 14:57:49.947501537 +0000 UTC m=+1574.142276079" observedRunningTime="2025-12-10 14:57:52.370254453 +0000 UTC m=+1576.565028995" watchObservedRunningTime="2025-12-10 14:57:52.395163185 +0000 UTC m=+1576.589937747" Dec 10 14:57:52 crc kubenswrapper[4727]: I1210 14:57:52.735631 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0c47fd2-9e71-493d-a5bd-d528703d9942" path="/var/lib/kubelet/pods/d0c47fd2-9e71-493d-a5bd-d528703d9942/volumes" Dec 10 14:57:53 crc kubenswrapper[4727]: E1210 14:57:53.023662 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod353e8adf_9611_450e_ac07_270a2550ee0e.slice/crio-e6b8adc746b2fe39675d6e3eef97a980b97353ea2505b234937ee868e4901699.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod353e8adf_9611_450e_ac07_270a2550ee0e.slice/crio-conmon-e6b8adc746b2fe39675d6e3eef97a980b97353ea2505b234937ee868e4901699.scope\": RecentStats: unable to find data in memory cache]" Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.116846 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-s8th4" podUID="d0c47fd2-9e71-493d-a5bd-d528703d9942" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.161:5353: i/o timeout" Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.285282 4727 generic.go:334] "Generic (PLEG): container finished" podID="353e8adf-9611-450e-ac07-270a2550ee0e" containerID="e6b8adc746b2fe39675d6e3eef97a980b97353ea2505b234937ee868e4901699" exitCode=2 Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.285346 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"353e8adf-9611-450e-ac07-270a2550ee0e","Type":"ContainerDied","Data":"e6b8adc746b2fe39675d6e3eef97a980b97353ea2505b234937ee868e4901699"} Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.288012 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f9b8bf876-ww2gz" event={"ID":"0acfba51-9dd2-48cb-b22d-7a59dff45f74","Type":"ContainerStarted","Data":"74aec552eb303bb4a2cc48d7317680363b863600019a3b48bb563ae33d64efc7"} Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.290227 4727 generic.go:334] "Generic (PLEG): container finished" podID="5fcd0f0a-6bef-42d5-a6be-7de638221c2d" containerID="c48bbe9306787ff6d1c66afdab3d41634499974ffd7ada9f8ba57b35bb5c2678" exitCode=0 Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.290369 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" event={"ID":"5fcd0f0a-6bef-42d5-a6be-7de638221c2d","Type":"ContainerDied","Data":"c48bbe9306787ff6d1c66afdab3d41634499974ffd7ada9f8ba57b35bb5c2678"} Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.496617 4727 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.619871 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-dns-svc\") pod \"3b348d71-4bc5-4820-8754-6816258a3eef\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.619935 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-config\") pod \"3b348d71-4bc5-4820-8754-6816258a3eef\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.619959 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-dns-swift-storage-0\") pod \"3b348d71-4bc5-4820-8754-6816258a3eef\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.620013 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-ovsdbserver-sb\") pod \"3b348d71-4bc5-4820-8754-6816258a3eef\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.620192 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29n28\" (UniqueName: \"kubernetes.io/projected/3b348d71-4bc5-4820-8754-6816258a3eef-kube-api-access-29n28\") pod \"3b348d71-4bc5-4820-8754-6816258a3eef\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.620227 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-ovsdbserver-nb\") pod \"3b348d71-4bc5-4820-8754-6816258a3eef\" (UID: \"3b348d71-4bc5-4820-8754-6816258a3eef\") " Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.629803 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b348d71-4bc5-4820-8754-6816258a3eef-kube-api-access-29n28" (OuterVolumeSpecName: "kube-api-access-29n28") pod "3b348d71-4bc5-4820-8754-6816258a3eef" (UID: "3b348d71-4bc5-4820-8754-6816258a3eef"). InnerVolumeSpecName "kube-api-access-29n28". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.652592 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-config" (OuterVolumeSpecName: "config") pod "3b348d71-4bc5-4820-8754-6816258a3eef" (UID: "3b348d71-4bc5-4820-8754-6816258a3eef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.654240 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3b348d71-4bc5-4820-8754-6816258a3eef" (UID: "3b348d71-4bc5-4820-8754-6816258a3eef"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.655782 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3b348d71-4bc5-4820-8754-6816258a3eef" (UID: "3b348d71-4bc5-4820-8754-6816258a3eef"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.656675 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3b348d71-4bc5-4820-8754-6816258a3eef" (UID: "3b348d71-4bc5-4820-8754-6816258a3eef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.679662 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3b348d71-4bc5-4820-8754-6816258a3eef" (UID: "3b348d71-4bc5-4820-8754-6816258a3eef"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.824238 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.824278 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.824289 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.824298 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.824308 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b348d71-4bc5-4820-8754-6816258a3eef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:53 crc kubenswrapper[4727]: I1210 14:57:53.824318 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29n28\" (UniqueName: \"kubernetes.io/projected/3b348d71-4bc5-4820-8754-6816258a3eef-kube-api-access-29n28\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:54 crc kubenswrapper[4727]: I1210 14:57:54.306361 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-wb96v" event={"ID":"3b348d71-4bc5-4820-8754-6816258a3eef","Type":"ContainerDied","Data":"4e0dd8f9e57e40aafe1ee19b112051127a18d563f13013e90e815270ee663f38"} Dec 10 14:57:54 crc kubenswrapper[4727]: I1210 14:57:54.306411 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-wb96v" Dec 10 14:57:54 crc kubenswrapper[4727]: I1210 14:57:54.306434 4727 scope.go:117] "RemoveContainer" containerID="7fc11b80b3ef98b1913599ad2487bd6a2da4452faf385b9d8a237c5e74631efe" Dec 10 14:57:54 crc kubenswrapper[4727]: I1210 14:57:54.312655 4727 generic.go:334] "Generic (PLEG): container finished" podID="353e8adf-9611-450e-ac07-270a2550ee0e" containerID="82de7cebbfb99eee77179edc0d130a9a3d31f3a941f3bac457a87d1ae0f6795d" exitCode=0 Dec 10 14:57:54 crc kubenswrapper[4727]: I1210 14:57:54.312789 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"353e8adf-9611-450e-ac07-270a2550ee0e","Type":"ContainerDied","Data":"82de7cebbfb99eee77179edc0d130a9a3d31f3a941f3bac457a87d1ae0f6795d"} Dec 10 14:57:54 crc kubenswrapper[4727]: I1210 14:57:54.320575 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f9b8bf876-ww2gz" event={"ID":"0acfba51-9dd2-48cb-b22d-7a59dff45f74","Type":"ContainerStarted","Data":"47fb4b5341867b21a4284760d2829c377b23bf60434a949563fe300973a2a490"} Dec 10 14:57:54 crc kubenswrapper[4727]: I1210 14:57:54.320649 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6fdc6346-d5eb-4696-ac1a-a05f59542183","Type":"ContainerStarted","Data":"b94c2bc06513dfc49b4c2a563fe1ba44c624439f0315825eff33ca747ccb02ea"} Dec 10 14:57:54 crc kubenswrapper[4727]: I1210 14:57:54.466011 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wb96v"] Dec 10 14:57:54 crc kubenswrapper[4727]: I1210 14:57:54.486919 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wb96v"] Dec 10 14:57:54 crc kubenswrapper[4727]: I1210 14:57:54.624696 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b348d71-4bc5-4820-8754-6816258a3eef" path="/var/lib/kubelet/pods/3b348d71-4bc5-4820-8754-6816258a3eef/volumes" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.144721 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.270377 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353e8adf-9611-450e-ac07-270a2550ee0e-run-httpd\") pod \"353e8adf-9611-450e-ac07-270a2550ee0e\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.270833 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353e8adf-9611-450e-ac07-270a2550ee0e-log-httpd\") pod \"353e8adf-9611-450e-ac07-270a2550ee0e\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.270900 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-config-data\") pod \"353e8adf-9611-450e-ac07-270a2550ee0e\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.271016 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fbht\" (UniqueName: \"kubernetes.io/projected/353e8adf-9611-450e-ac07-270a2550ee0e-kube-api-access-7fbht\") pod \"353e8adf-9611-450e-ac07-270a2550ee0e\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.271095 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-combined-ca-bundle\") pod \"353e8adf-9611-450e-ac07-270a2550ee0e\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.271267 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/353e8adf-9611-450e-ac07-270a2550ee0e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "353e8adf-9611-450e-ac07-270a2550ee0e" (UID: "353e8adf-9611-450e-ac07-270a2550ee0e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.271285 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/353e8adf-9611-450e-ac07-270a2550ee0e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "353e8adf-9611-450e-ac07-270a2550ee0e" (UID: "353e8adf-9611-450e-ac07-270a2550ee0e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.271390 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-sg-core-conf-yaml\") pod \"353e8adf-9611-450e-ac07-270a2550ee0e\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.273516 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-scripts\") pod \"353e8adf-9611-450e-ac07-270a2550ee0e\" (UID: \"353e8adf-9611-450e-ac07-270a2550ee0e\") " Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.276371 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353e8adf-9611-450e-ac07-270a2550ee0e-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.276405 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353e8adf-9611-450e-ac07-270a2550ee0e-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.277201 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/353e8adf-9611-450e-ac07-270a2550ee0e-kube-api-access-7fbht" (OuterVolumeSpecName: "kube-api-access-7fbht") pod "353e8adf-9611-450e-ac07-270a2550ee0e" (UID: "353e8adf-9611-450e-ac07-270a2550ee0e"). InnerVolumeSpecName "kube-api-access-7fbht". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.282833 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-scripts" (OuterVolumeSpecName: "scripts") pod "353e8adf-9611-450e-ac07-270a2550ee0e" (UID: "353e8adf-9611-450e-ac07-270a2550ee0e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.306238 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "353e8adf-9611-450e-ac07-270a2550ee0e" (UID: "353e8adf-9611-450e-ac07-270a2550ee0e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.318252 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-config-data" (OuterVolumeSpecName: "config-data") pod "353e8adf-9611-450e-ac07-270a2550ee0e" (UID: "353e8adf-9611-450e-ac07-270a2550ee0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.320750 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "353e8adf-9611-450e-ac07-270a2550ee0e" (UID: "353e8adf-9611-450e-ac07-270a2550ee0e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.336451 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"353e8adf-9611-450e-ac07-270a2550ee0e","Type":"ContainerDied","Data":"a070dc413d79ba3cea18d4b0fa5ff63e7a7bdaeca1560387505c4e2afa9e88c7"} Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.336512 4727 scope.go:117] "RemoveContainer" containerID="e6b8adc746b2fe39675d6e3eef97a980b97353ea2505b234937ee868e4901699" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.336642 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.369721 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" event={"ID":"5fcd0f0a-6bef-42d5-a6be-7de638221c2d","Type":"ContainerStarted","Data":"abcba7042fd38da9aa60a498a6143a2b721ea6adfdc6cebc70454788e93f9427"} Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.369787 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.369896 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.380406 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.380446 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.380459 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.380469 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353e8adf-9611-450e-ac07-270a2550ee0e-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.380480 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fbht\" (UniqueName: \"kubernetes.io/projected/353e8adf-9611-450e-ac07-270a2550ee0e-kube-api-access-7fbht\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.393849 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7f9b8bf876-ww2gz" podStartSLOduration=6.393825938 podStartE2EDuration="6.393825938s" podCreationTimestamp="2025-12-10 14:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:55.390485831 +0000 UTC m=+1579.585260373" watchObservedRunningTime="2025-12-10 14:57:55.393825938 +0000 UTC m=+1579.588600480" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.424445 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" podStartSLOduration=6.424423655 podStartE2EDuration="6.424423655s" podCreationTimestamp="2025-12-10 
14:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:55.417447092 +0000 UTC m=+1579.612221634" watchObservedRunningTime="2025-12-10 14:57:55.424423655 +0000 UTC m=+1579.619198187" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.445555 4727 scope.go:117] "RemoveContainer" containerID="82de7cebbfb99eee77179edc0d130a9a3d31f3a941f3bac457a87d1ae0f6795d" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.538283 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.556750 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.573749 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:57:55 crc kubenswrapper[4727]: E1210 14:57:55.574318 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b348d71-4bc5-4820-8754-6816258a3eef" containerName="init" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.574342 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b348d71-4bc5-4820-8754-6816258a3eef" containerName="init" Dec 10 14:57:55 crc kubenswrapper[4727]: E1210 14:57:55.574358 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="353e8adf-9611-450e-ac07-270a2550ee0e" containerName="ceilometer-notification-agent" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.574365 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="353e8adf-9611-450e-ac07-270a2550ee0e" containerName="ceilometer-notification-agent" Dec 10 14:57:55 crc kubenswrapper[4727]: E1210 14:57:55.574377 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="353e8adf-9611-450e-ac07-270a2550ee0e" containerName="sg-core" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.574385 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="353e8adf-9611-450e-ac07-270a2550ee0e" containerName="sg-core" Dec 10 14:57:55 crc kubenswrapper[4727]: E1210 14:57:55.574415 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0c47fd2-9e71-493d-a5bd-d528703d9942" containerName="init" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.574422 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c47fd2-9e71-493d-a5bd-d528703d9942" containerName="init" Dec 10 14:57:55 crc kubenswrapper[4727]: E1210 14:57:55.574440 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0c47fd2-9e71-493d-a5bd-d528703d9942" containerName="dnsmasq-dns" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.574447 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c47fd2-9e71-493d-a5bd-d528703d9942" containerName="dnsmasq-dns" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.574701 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0c47fd2-9e71-493d-a5bd-d528703d9942" containerName="dnsmasq-dns" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.574728 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="353e8adf-9611-450e-ac07-270a2550ee0e" containerName="sg-core" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.574748 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="353e8adf-9611-450e-ac07-270a2550ee0e" containerName="ceilometer-notification-agent" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.574763 4727 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3b348d71-4bc5-4820-8754-6816258a3eef" containerName="init" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.577927 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.581234 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.581537 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.609877 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.693328 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pp69\" (UniqueName: \"kubernetes.io/projected/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-kube-api-access-5pp69\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.694053 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.694098 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.694685 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-run-httpd\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.694761 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-scripts\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.694946 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-config-data\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.694991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-log-httpd\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.797292 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pp69\" 
(UniqueName: \"kubernetes.io/projected/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-kube-api-access-5pp69\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.797437 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.797503 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.798592 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-run-httpd\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.798663 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-scripts\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.799224 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-config-data\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.799284 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-log-httpd\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.799297 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-run-httpd\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.799738 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-log-httpd\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.802060 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.802133 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-scripts\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.802884 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.803199 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-config-data\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.821787 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pp69\" (UniqueName: \"kubernetes.io/projected/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-kube-api-access-5pp69\") pod \"ceilometer-0\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " pod="openstack/ceilometer-0" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.849674 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-78f5bcc65c-vgwfj"] Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.852293 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.854745 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.856266 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.891750 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78f5bcc65c-vgwfj"] Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.913447 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-internal-tls-certs\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.915624 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-httpd-config\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.915683 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-config\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.915924 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-ovndb-tls-certs\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: 
\"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.915991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-public-tls-certs\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.916038 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-combined-ca-bundle\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.916076 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjqrb\" (UniqueName: \"kubernetes.io/projected/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-kube-api-access-pjqrb\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:55 crc kubenswrapper[4727]: I1210 14:57:55.935293 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.019799 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-httpd-config\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.020392 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-config\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.021505 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-ovndb-tls-certs\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.021562 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-public-tls-certs\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.021597 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-combined-ca-bundle\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.021628 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjqrb\" (UniqueName: 
\"kubernetes.io/projected/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-kube-api-access-pjqrb\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.021725 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-internal-tls-certs\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.027046 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-internal-tls-certs\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.029414 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-config\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.035758 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-httpd-config\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.036030 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-ovndb-tls-certs\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.040736 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-public-tls-certs\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.040891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-combined-ca-bundle\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.043047 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjqrb\" (UniqueName: \"kubernetes.io/projected/b4032a4d-aa42-4515-a09c-6647e6d0b7d5-kube-api-access-pjqrb\") pod \"neutron-78f5bcc65c-vgwfj\" (UID: \"b4032a4d-aa42-4515-a09c-6647e6d0b7d5\") " pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.237997 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.413153 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6fdc6346-d5eb-4696-ac1a-a05f59542183","Type":"ContainerStarted","Data":"175ccc193ebb6d1c6b2006897b5871efd56a5860ed02a65994fbe938dd2b0898"} Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.413311 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6fdc6346-d5eb-4696-ac1a-a05f59542183" containerName="cinder-api-log" containerID="cri-o://b94c2bc06513dfc49b4c2a563fe1ba44c624439f0315825eff33ca747ccb02ea" gracePeriod=30 Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.413509 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.413607 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6fdc6346-d5eb-4696-ac1a-a05f59542183" containerName="cinder-api" containerID="cri-o://175ccc193ebb6d1c6b2006897b5871efd56a5860ed02a65994fbe938dd2b0898" gracePeriod=30 Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.441513 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b37eebb5-17ce-4765-a7d5-022abb215816","Type":"ContainerStarted","Data":"d6719c22672611bd03fb74756eb50fa06144a8269f3aa6ed950f98b154dfb899"} Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.467747 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.467722434 podStartE2EDuration="7.467722434s" podCreationTimestamp="2025-12-10 14:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:56.450212326 +0000 UTC m=+1580.644986868" watchObservedRunningTime="2025-12-10 14:57:56.467722434 +0000 UTC m=+1580.662496976" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.751363 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="353e8adf-9611-450e-ac07-270a2550ee0e" path="/var/lib/kubelet/pods/353e8adf-9611-450e-ac07-270a2550ee0e/volumes" Dec 10 14:57:56 crc kubenswrapper[4727]: I1210 14:57:56.804823 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.283421 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78f5bcc65c-vgwfj"] Dec 10 14:57:57 crc kubenswrapper[4727]: W1210 14:57:57.379188 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4032a4d_aa42_4515_a09c_6647e6d0b7d5.slice/crio-76a26b4b8dd71a4b51d62203e1bcfcad5293107f81bad905982a80e33f0c2c9c WatchSource:0}: Error finding container 76a26b4b8dd71a4b51d62203e1bcfcad5293107f81bad905982a80e33f0c2c9c: Status 404 returned error can't find the container with id 76a26b4b8dd71a4b51d62203e1bcfcad5293107f81bad905982a80e33f0c2c9c Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.475471 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78f5bcc65c-vgwfj" event={"ID":"b4032a4d-aa42-4515-a09c-6647e6d0b7d5","Type":"ContainerStarted","Data":"76a26b4b8dd71a4b51d62203e1bcfcad5293107f81bad905982a80e33f0c2c9c"} Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.505320 4727 
generic.go:334] "Generic (PLEG): container finished" podID="6fdc6346-d5eb-4696-ac1a-a05f59542183" containerID="175ccc193ebb6d1c6b2006897b5871efd56a5860ed02a65994fbe938dd2b0898" exitCode=0 Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.505370 4727 generic.go:334] "Generic (PLEG): container finished" podID="6fdc6346-d5eb-4696-ac1a-a05f59542183" containerID="b94c2bc06513dfc49b4c2a563fe1ba44c624439f0315825eff33ca747ccb02ea" exitCode=143 Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.505506 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6fdc6346-d5eb-4696-ac1a-a05f59542183","Type":"ContainerDied","Data":"175ccc193ebb6d1c6b2006897b5871efd56a5860ed02a65994fbe938dd2b0898"} Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.505541 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6fdc6346-d5eb-4696-ac1a-a05f59542183","Type":"ContainerDied","Data":"b94c2bc06513dfc49b4c2a563fe1ba44c624439f0315825eff33ca747ccb02ea"} Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.509486 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c3d3fe1-767d-429b-adcd-72bd15ff6f65","Type":"ContainerStarted","Data":"662afca0051167d9299bbf046e00b7d0282f5186333d3f76c9e63509f8c89803"} Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.706683 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.752291 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-config-data\") pod \"6fdc6346-d5eb-4696-ac1a-a05f59542183\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.752654 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-config-data-custom\") pod \"6fdc6346-d5eb-4696-ac1a-a05f59542183\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.752834 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn2h5\" (UniqueName: \"kubernetes.io/projected/6fdc6346-d5eb-4696-ac1a-a05f59542183-kube-api-access-nn2h5\") pod \"6fdc6346-d5eb-4696-ac1a-a05f59542183\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.753043 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fdc6346-d5eb-4696-ac1a-a05f59542183-etc-machine-id\") pod \"6fdc6346-d5eb-4696-ac1a-a05f59542183\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.753156 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-combined-ca-bundle\") pod \"6fdc6346-d5eb-4696-ac1a-a05f59542183\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.753266 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-scripts\") pod 
\"6fdc6346-d5eb-4696-ac1a-a05f59542183\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.753413 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fdc6346-d5eb-4696-ac1a-a05f59542183-logs\") pod \"6fdc6346-d5eb-4696-ac1a-a05f59542183\" (UID: \"6fdc6346-d5eb-4696-ac1a-a05f59542183\") " Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.754830 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fdc6346-d5eb-4696-ac1a-a05f59542183-logs" (OuterVolumeSpecName: "logs") pod "6fdc6346-d5eb-4696-ac1a-a05f59542183" (UID: "6fdc6346-d5eb-4696-ac1a-a05f59542183"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.755060 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fdc6346-d5eb-4696-ac1a-a05f59542183-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6fdc6346-d5eb-4696-ac1a-a05f59542183" (UID: "6fdc6346-d5eb-4696-ac1a-a05f59542183"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.760327 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fdc6346-d5eb-4696-ac1a-a05f59542183-kube-api-access-nn2h5" (OuterVolumeSpecName: "kube-api-access-nn2h5") pod "6fdc6346-d5eb-4696-ac1a-a05f59542183" (UID: "6fdc6346-d5eb-4696-ac1a-a05f59542183"). InnerVolumeSpecName "kube-api-access-nn2h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.769254 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-scripts" (OuterVolumeSpecName: "scripts") pod "6fdc6346-d5eb-4696-ac1a-a05f59542183" (UID: "6fdc6346-d5eb-4696-ac1a-a05f59542183"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.795705 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6fdc6346-d5eb-4696-ac1a-a05f59542183" (UID: "6fdc6346-d5eb-4696-ac1a-a05f59542183"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.845505 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fdc6346-d5eb-4696-ac1a-a05f59542183" (UID: "6fdc6346-d5eb-4696-ac1a-a05f59542183"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.856637 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.856672 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn2h5\" (UniqueName: \"kubernetes.io/projected/6fdc6346-d5eb-4696-ac1a-a05f59542183-kube-api-access-nn2h5\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.856795 4727 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fdc6346-d5eb-4696-ac1a-a05f59542183-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.856816 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.856829 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.856840 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fdc6346-d5eb-4696-ac1a-a05f59542183-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:57 crc kubenswrapper[4727]: I1210 14:57:57.972453 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-config-data" (OuterVolumeSpecName: "config-data") pod "6fdc6346-d5eb-4696-ac1a-a05f59542183" (UID: "6fdc6346-d5eb-4696-ac1a-a05f59542183"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.061107 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fdc6346-d5eb-4696-ac1a-a05f59542183-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.548465 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78f5bcc65c-vgwfj" event={"ID":"b4032a4d-aa42-4515-a09c-6647e6d0b7d5","Type":"ContainerStarted","Data":"c8e4866ee6a0b50c78b28733de8f86b24029ba3ab5bef75957813fa86846c2ea"} Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.559232 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b37eebb5-17ce-4765-a7d5-022abb215816","Type":"ContainerStarted","Data":"44a1791b60efa384fe5cce321b2ee182f04c2c8bd1af36b2bac01930e8fd0d9c"} Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.585156 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6fdc6346-d5eb-4696-ac1a-a05f59542183","Type":"ContainerDied","Data":"5603592da6941440d2ca283fa32f9878d1ac2ac3dff28a35046ffed0243e2405"} Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.585207 4727 scope.go:117] "RemoveContainer" containerID="175ccc193ebb6d1c6b2006897b5871efd56a5860ed02a65994fbe938dd2b0898" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.585395 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.608531 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.7885560179999995 podStartE2EDuration="9.608499873s" podCreationTimestamp="2025-12-10 14:57:49 +0000 UTC" firstStartedPulling="2025-12-10 14:57:50.979315694 +0000 UTC m=+1575.174090236" lastFinishedPulling="2025-12-10 14:57:54.799259549 +0000 UTC m=+1578.994034091" observedRunningTime="2025-12-10 14:57:58.590999146 +0000 UTC m=+1582.785773688" watchObservedRunningTime="2025-12-10 14:57:58.608499873 +0000 UTC m=+1582.803274415" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.701333 4727 scope.go:117] "RemoveContainer" containerID="b94c2bc06513dfc49b4c2a563fe1ba44c624439f0315825eff33ca747ccb02ea" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.731124 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.807088 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.901222 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 10 14:57:58 crc kubenswrapper[4727]: E1210 14:57:58.901669 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fdc6346-d5eb-4696-ac1a-a05f59542183" containerName="cinder-api" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.901689 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fdc6346-d5eb-4696-ac1a-a05f59542183" containerName="cinder-api" Dec 10 14:57:58 crc kubenswrapper[4727]: E1210 14:57:58.901718 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fdc6346-d5eb-4696-ac1a-a05f59542183" containerName="cinder-api-log" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.901725 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fdc6346-d5eb-4696-ac1a-a05f59542183" containerName="cinder-api-log" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.901937 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fdc6346-d5eb-4696-ac1a-a05f59542183" containerName="cinder-api" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.901958 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fdc6346-d5eb-4696-ac1a-a05f59542183" containerName="cinder-api-log" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.912109 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.918582 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.919059 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.918746 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.921080 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.921148 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-config-data\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.921181 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlwws\" (UniqueName: \"kubernetes.io/projected/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-kube-api-access-jlwws\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.921204 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-config-data-custom\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.921221 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-scripts\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.921249 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.921310 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.921364 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-logs\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 
14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.921414 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:58 crc kubenswrapper[4727]: I1210 14:57:58.935839 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.027192 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.027465 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-config-data\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.027584 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlwws\" (UniqueName: \"kubernetes.io/projected/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-kube-api-access-jlwws\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.027658 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-config-data-custom\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.027743 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-scripts\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.027832 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.027988 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.028125 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-logs\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.028230 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.037194 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.037831 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-logs\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.038800 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.046595 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-config-data\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.047560 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.048301 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.048695 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-scripts\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.051563 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-config-data-custom\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.078791 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlwws\" (UniqueName: \"kubernetes.io/projected/1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054-kube-api-access-jlwws\") pod \"cinder-api-0\" (UID: \"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054\") " pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.239613 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.436971 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.523633 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f6784f8dd-pcqm7" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.656264 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-778db78b58-dzbj2"] Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.656492 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-778db78b58-dzbj2" podUID="0f601b3c-a4c8-4948-a149-784b4d88a0b4" containerName="barbican-api-log" containerID="cri-o://f0c8141debefa58e1c6d3593250cb651937fad495bc54c7e0a96754370ba8bb2" gracePeriod=30 Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.656583 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-778db78b58-dzbj2" podUID="0f601b3c-a4c8-4948-a149-784b4d88a0b4" containerName="barbican-api" containerID="cri-o://9a5b92a092e819457258571ab100a96e2c2517ff7ab2a193786ca67a8faa5732" gracePeriod=30 Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.664705 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c3d3fe1-767d-429b-adcd-72bd15ff6f65","Type":"ContainerStarted","Data":"f2b84b2705e5bf76ab581f55589f15eccc5771d8078211ae5ff2838b0665a04e"} Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.682348 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78f5bcc65c-vgwfj" event={"ID":"b4032a4d-aa42-4515-a09c-6647e6d0b7d5","Type":"ContainerStarted","Data":"f352d79cb94258c88c76c4ea3f4ca946b60addbc6cae5c4955755fb2e4795816"} Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.683721 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.694009 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.713255 4727 generic.go:334] "Generic (PLEG): container finished" podID="8ef9b442-2a15-4657-b111-5af4a72d39e4" containerID="629cf608530b2484323c36369fbc50bcd816b4833942e8e4535ce7e7bdae36c1" exitCode=0 Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.713330 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-rnv6s" event={"ID":"8ef9b442-2a15-4657-b111-5af4a72d39e4","Type":"ContainerDied","Data":"629cf608530b2484323c36369fbc50bcd816b4833942e8e4535ce7e7bdae36c1"} Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.745708 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-78f5bcc65c-vgwfj" podStartSLOduration=4.745677614 podStartE2EDuration="4.745677614s" podCreationTimestamp="2025-12-10 14:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:59.721119542 +0000 UTC m=+1583.915894084" watchObservedRunningTime="2025-12-10 14:57:59.745677614 +0000 UTC m=+1583.940452156" Dec 10 14:57:59 crc kubenswrapper[4727]: I1210 14:57:59.992609 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 10 
14:58:00 crc kubenswrapper[4727]: I1210 14:58:00.065171 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:58:00 crc kubenswrapper[4727]: I1210 14:58:00.169888 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-njbjk"] Dec 10 14:58:00 crc kubenswrapper[4727]: I1210 14:58:00.170200 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" podUID="6f5c8dd7-8982-4b05-9582-6fce29de5659" containerName="dnsmasq-dns" containerID="cri-o://1dbf83de38df47538b7f0728c06d69d5ec139d286b30cfb94efd537436a1f84d" gracePeriod=10 Dec 10 14:58:00 crc kubenswrapper[4727]: I1210 14:58:00.593871 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fdc6346-d5eb-4696-ac1a-a05f59542183" path="/var/lib/kubelet/pods/6fdc6346-d5eb-4696-ac1a-a05f59542183/volumes" Dec 10 14:58:00 crc kubenswrapper[4727]: I1210 14:58:00.750876 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054","Type":"ContainerStarted","Data":"a0e3f2443ab7f2555dbb0ea494f470913d57afa3758449d6da2f8e50324c3f2c"} Dec 10 14:58:00 crc kubenswrapper[4727]: I1210 14:58:00.752461 4727 generic.go:334] "Generic (PLEG): container finished" podID="0f601b3c-a4c8-4948-a149-784b4d88a0b4" containerID="f0c8141debefa58e1c6d3593250cb651937fad495bc54c7e0a96754370ba8bb2" exitCode=143 Dec 10 14:58:00 crc kubenswrapper[4727]: I1210 14:58:00.752676 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778db78b58-dzbj2" event={"ID":"0f601b3c-a4c8-4948-a149-784b4d88a0b4","Type":"ContainerDied","Data":"f0c8141debefa58e1c6d3593250cb651937fad495bc54c7e0a96754370ba8bb2"} Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.349073 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.527213 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-config-data\") pod \"8ef9b442-2a15-4657-b111-5af4a72d39e4\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.527608 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8ef9b442-2a15-4657-b111-5af4a72d39e4-certs\") pod \"8ef9b442-2a15-4657-b111-5af4a72d39e4\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.527672 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg2k8\" (UniqueName: \"kubernetes.io/projected/8ef9b442-2a15-4657-b111-5af4a72d39e4-kube-api-access-rg2k8\") pod \"8ef9b442-2a15-4657-b111-5af4a72d39e4\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.527892 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-scripts\") pod \"8ef9b442-2a15-4657-b111-5af4a72d39e4\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.527947 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-combined-ca-bundle\") pod \"8ef9b442-2a15-4657-b111-5af4a72d39e4\" (UID: \"8ef9b442-2a15-4657-b111-5af4a72d39e4\") " Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.542518 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-scripts" (OuterVolumeSpecName: "scripts") pod "8ef9b442-2a15-4657-b111-5af4a72d39e4" (UID: "8ef9b442-2a15-4657-b111-5af4a72d39e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.544794 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ef9b442-2a15-4657-b111-5af4a72d39e4-certs" (OuterVolumeSpecName: "certs") pod "8ef9b442-2a15-4657-b111-5af4a72d39e4" (UID: "8ef9b442-2a15-4657-b111-5af4a72d39e4"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.560768 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ef9b442-2a15-4657-b111-5af4a72d39e4-kube-api-access-rg2k8" (OuterVolumeSpecName: "kube-api-access-rg2k8") pod "8ef9b442-2a15-4657-b111-5af4a72d39e4" (UID: "8ef9b442-2a15-4657-b111-5af4a72d39e4"). InnerVolumeSpecName "kube-api-access-rg2k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.605080 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ef9b442-2a15-4657-b111-5af4a72d39e4" (UID: "8ef9b442-2a15-4657-b111-5af4a72d39e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.632042 4727 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8ef9b442-2a15-4657-b111-5af4a72d39e4-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.632088 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg2k8\" (UniqueName: \"kubernetes.io/projected/8ef9b442-2a15-4657-b111-5af4a72d39e4-kube-api-access-rg2k8\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.632098 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.632107 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.656604 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-config-data" (OuterVolumeSpecName: "config-data") pod "8ef9b442-2a15-4657-b111-5af4a72d39e4" (UID: "8ef9b442-2a15-4657-b111-5af4a72d39e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.734826 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef9b442-2a15-4657-b111-5af4a72d39e4-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.778595 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-rnv6s" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.779051 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-rnv6s" event={"ID":"8ef9b442-2a15-4657-b111-5af4a72d39e4","Type":"ContainerDied","Data":"c30c0c741754a30ceaa30e94b34871514df7832945ca996444bbc1fb1e2953e0"} Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.779090 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c30c0c741754a30ceaa30e94b34871514df7832945ca996444bbc1fb1e2953e0" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.895531 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-twwbm"] Dec 10 14:58:01 crc kubenswrapper[4727]: E1210 14:58:01.896311 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef9b442-2a15-4657-b111-5af4a72d39e4" containerName="cloudkitty-db-sync" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.896431 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef9b442-2a15-4657-b111-5af4a72d39e4" containerName="cloudkitty-db-sync" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.896694 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ef9b442-2a15-4657-b111-5af4a72d39e4" containerName="cloudkitty-db-sync" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.899240 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.902796 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-jlplt" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.903149 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.911416 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.911597 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.912224 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Dec 10 14:58:01 crc kubenswrapper[4727]: I1210 14:58:01.930512 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-twwbm"] Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.046473 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/94e96924-cc06-48cd-a531-b0d5714f0d1c-certs\") pod \"cloudkitty-storageinit-twwbm\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.046683 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-config-data\") pod \"cloudkitty-storageinit-twwbm\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.047218 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-scripts\") pod \"cloudkitty-storageinit-twwbm\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.047293 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrwf4\" (UniqueName: \"kubernetes.io/projected/94e96924-cc06-48cd-a531-b0d5714f0d1c-kube-api-access-jrwf4\") pod \"cloudkitty-storageinit-twwbm\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.047464 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-combined-ca-bundle\") pod \"cloudkitty-storageinit-twwbm\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.149229 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-scripts\") pod \"cloudkitty-storageinit-twwbm\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.149310 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrwf4\" (UniqueName: \"kubernetes.io/projected/94e96924-cc06-48cd-a531-b0d5714f0d1c-kube-api-access-jrwf4\") pod \"cloudkitty-storageinit-twwbm\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.149404 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-combined-ca-bundle\") pod \"cloudkitty-storageinit-twwbm\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.149443 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/94e96924-cc06-48cd-a531-b0d5714f0d1c-certs\") pod \"cloudkitty-storageinit-twwbm\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.149486 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-config-data\") pod \"cloudkitty-storageinit-twwbm\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.155785 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/94e96924-cc06-48cd-a531-b0d5714f0d1c-certs\") pod \"cloudkitty-storageinit-twwbm\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.157066 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-combined-ca-bundle\") pod \"cloudkitty-storageinit-twwbm\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.161640 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-scripts\") pod \"cloudkitty-storageinit-twwbm\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.171451 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrwf4\" (UniqueName: \"kubernetes.io/projected/94e96924-cc06-48cd-a531-b0d5714f0d1c-kube-api-access-jrwf4\") pod \"cloudkitty-storageinit-twwbm\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.175133 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-config-data\") pod \"cloudkitty-storageinit-twwbm\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.244523 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.840279 4727 generic.go:334] "Generic (PLEG): container finished" podID="6f5c8dd7-8982-4b05-9582-6fce29de5659" containerID="1dbf83de38df47538b7f0728c06d69d5ec139d286b30cfb94efd537436a1f84d" exitCode=0 Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.840570 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" event={"ID":"6f5c8dd7-8982-4b05-9582-6fce29de5659","Type":"ContainerDied","Data":"1dbf83de38df47538b7f0728c06d69d5ec139d286b30cfb94efd537436a1f84d"} Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.871929 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-twwbm"] Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.888851 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f9b858948-mx57t" Dec 10 14:58:02 crc kubenswrapper[4727]: I1210 14:58:02.888960 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f9b858948-mx57t" Dec 10 14:58:03 crc kubenswrapper[4727]: I1210 14:58:03.217769 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-778db78b58-dzbj2" podUID="0f601b3c-a4c8-4948-a149-784b4d88a0b4" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": dial tcp 10.217.0.171:9311: connect: connection refused" Dec 10 14:58:03 crc kubenswrapper[4727]: I1210 14:58:03.218037 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-778db78b58-dzbj2" podUID="0f601b3c-a4c8-4948-a149-784b4d88a0b4" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": dial tcp 10.217.0.171:9311: connect: connection refused" Dec 10 14:58:03 crc kubenswrapper[4727]: I1210 14:58:03.690322 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:58:03 crc kubenswrapper[4727]: I1210 14:58:03.879813 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-dns-swift-storage-0\") pod \"6f5c8dd7-8982-4b05-9582-6fce29de5659\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " Dec 10 14:58:03 crc kubenswrapper[4727]: I1210 14:58:03.879923 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fth2t\" (UniqueName: \"kubernetes.io/projected/6f5c8dd7-8982-4b05-9582-6fce29de5659-kube-api-access-fth2t\") pod \"6f5c8dd7-8982-4b05-9582-6fce29de5659\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " Dec 10 14:58:03 crc kubenswrapper[4727]: I1210 14:58:03.879952 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-dns-svc\") pod \"6f5c8dd7-8982-4b05-9582-6fce29de5659\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " Dec 10 14:58:03 crc kubenswrapper[4727]: I1210 14:58:03.880012 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-ovsdbserver-sb\") pod \"6f5c8dd7-8982-4b05-9582-6fce29de5659\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " Dec 10 14:58:03 crc kubenswrapper[4727]: I1210 14:58:03.880140 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-config\") pod \"6f5c8dd7-8982-4b05-9582-6fce29de5659\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " Dec 10 14:58:03 crc kubenswrapper[4727]: I1210 14:58:03.880175 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-ovsdbserver-nb\") pod \"6f5c8dd7-8982-4b05-9582-6fce29de5659\" (UID: \"6f5c8dd7-8982-4b05-9582-6fce29de5659\") " Dec 10 14:58:03 crc kubenswrapper[4727]: E1210 14:58:03.890296 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f601b3c_a4c8_4948_a149_784b4d88a0b4.slice/crio-9a5b92a092e819457258571ab100a96e2c2517ff7ab2a193786ca67a8faa5732.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f601b3c_a4c8_4948_a149_784b4d88a0b4.slice/crio-conmon-9a5b92a092e819457258571ab100a96e2c2517ff7ab2a193786ca67a8faa5732.scope\": RecentStats: unable to find data in memory cache]" Dec 10 14:58:03 crc kubenswrapper[4727]: I1210 14:58:03.922918 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5c8dd7-8982-4b05-9582-6fce29de5659-kube-api-access-fth2t" (OuterVolumeSpecName: "kube-api-access-fth2t") pod "6f5c8dd7-8982-4b05-9582-6fce29de5659" (UID: "6f5c8dd7-8982-4b05-9582-6fce29de5659"). InnerVolumeSpecName "kube-api-access-fth2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:03 crc kubenswrapper[4727]: I1210 14:58:03.928643 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054","Type":"ContainerStarted","Data":"6ca56999da92694981d8758381ac987e43241e90f92ca83a8e117c0c9349e770"} Dec 10 14:58:03 crc kubenswrapper[4727]: I1210 14:58:03.979200 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-twwbm" event={"ID":"94e96924-cc06-48cd-a531-b0d5714f0d1c","Type":"ContainerStarted","Data":"e185c1c424a3fde4684e68da4b83ef178cbf765cbd64567d8c1ebc2bf36a6f7b"} Dec 10 14:58:03 crc kubenswrapper[4727]: I1210 14:58:03.979762 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-twwbm" event={"ID":"94e96924-cc06-48cd-a531-b0d5714f0d1c","Type":"ContainerStarted","Data":"c617643a51cf83c21769b80fa94237f96e2966a09fc54db1d4268ccebf9c6669"} Dec 10 14:58:03 crc kubenswrapper[4727]: I1210 14:58:03.983093 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fth2t\" (UniqueName: \"kubernetes.io/projected/6f5c8dd7-8982-4b05-9582-6fce29de5659-kube-api-access-fth2t\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.001499 4727 generic.go:334] "Generic (PLEG): container finished" podID="0f601b3c-a4c8-4948-a149-784b4d88a0b4" containerID="9a5b92a092e819457258571ab100a96e2c2517ff7ab2a193786ca67a8faa5732" exitCode=0 Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.001617 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778db78b58-dzbj2" event={"ID":"0f601b3c-a4c8-4948-a149-784b4d88a0b4","Type":"ContainerDied","Data":"9a5b92a092e819457258571ab100a96e2c2517ff7ab2a193786ca67a8faa5732"} Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.023787 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-twwbm" podStartSLOduration=3.023766781 podStartE2EDuration="3.023766781s" podCreationTimestamp="2025-12-10 14:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:04.001943188 +0000 UTC m=+1588.196717730" watchObservedRunningTime="2025-12-10 14:58:04.023766781 +0000 UTC m=+1588.218541343" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.045609 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c3d3fe1-767d-429b-adcd-72bd15ff6f65","Type":"ContainerStarted","Data":"1b22da9a800bedeed241c7d8a4f2abf2e40a1b9db07f6532df08d01a37489cef"} Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.065778 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" event={"ID":"6f5c8dd7-8982-4b05-9582-6fce29de5659","Type":"ContainerDied","Data":"df82ed8a2c185aaf6c8fc29aeb28d917f659e5d352b53d1808a6230ea51c1032"} Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.065831 4727 scope.go:117] "RemoveContainer" containerID="1dbf83de38df47538b7f0728c06d69d5ec139d286b30cfb94efd537436a1f84d" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.066017 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.275289 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6f5c8dd7-8982-4b05-9582-6fce29de5659" (UID: "6f5c8dd7-8982-4b05-9582-6fce29de5659"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.280515 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-config" (OuterVolumeSpecName: "config") pod "6f5c8dd7-8982-4b05-9582-6fce29de5659" (UID: "6f5c8dd7-8982-4b05-9582-6fce29de5659"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.296223 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6f5c8dd7-8982-4b05-9582-6fce29de5659" (UID: "6f5c8dd7-8982-4b05-9582-6fce29de5659"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.311237 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f5c8dd7-8982-4b05-9582-6fce29de5659" (UID: "6f5c8dd7-8982-4b05-9582-6fce29de5659"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.329262 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.329308 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.329322 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.329337 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.356245 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f5c8dd7-8982-4b05-9582-6fce29de5659" (UID: "6f5c8dd7-8982-4b05-9582-6fce29de5659"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.444653 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f5c8dd7-8982-4b05-9582-6fce29de5659-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.503530 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.512323 4727 scope.go:117] "RemoveContainer" containerID="f8fe4d9c1a8bbbf6228060783317dc49068d03b23520b971f0a888488ba37660" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.526994 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-njbjk"] Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.546086 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-config-data\") pod \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.546160 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f601b3c-a4c8-4948-a149-784b4d88a0b4-logs\") pod \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.546199 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-config-data-custom\") pod \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.546323 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-combined-ca-bundle\") pod \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.546375 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjt7d\" (UniqueName: \"kubernetes.io/projected/0f601b3c-a4c8-4948-a149-784b4d88a0b4-kube-api-access-bjt7d\") pod \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\" (UID: \"0f601b3c-a4c8-4948-a149-784b4d88a0b4\") " Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.547281 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f601b3c-a4c8-4948-a149-784b4d88a0b4-logs" (OuterVolumeSpecName: "logs") pod "0f601b3c-a4c8-4948-a149-784b4d88a0b4" (UID: "0f601b3c-a4c8-4948-a149-784b4d88a0b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.564293 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f601b3c-a4c8-4948-a149-784b4d88a0b4-kube-api-access-bjt7d" (OuterVolumeSpecName: "kube-api-access-bjt7d") pod "0f601b3c-a4c8-4948-a149-784b4d88a0b4" (UID: "0f601b3c-a4c8-4948-a149-784b4d88a0b4"). InnerVolumeSpecName "kube-api-access-bjt7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.564565 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0f601b3c-a4c8-4948-a149-784b4d88a0b4" (UID: "0f601b3c-a4c8-4948-a149-784b4d88a0b4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.618267 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-njbjk"] Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.618369 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-584578f496-pdfh5" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.620190 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f601b3c-a4c8-4948-a149-784b4d88a0b4" (UID: "0f601b3c-a4c8-4948-a149-784b4d88a0b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.650044 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.650079 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.650092 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjt7d\" (UniqueName: \"kubernetes.io/projected/0f601b3c-a4c8-4948-a149-784b4d88a0b4-kube-api-access-bjt7d\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.650106 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f601b3c-a4c8-4948-a149-784b4d88a0b4-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.651044 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-config-data" (OuterVolumeSpecName: "config-data") pod "0f601b3c-a4c8-4948-a149-784b4d88a0b4" (UID: "0f601b3c-a4c8-4948-a149-784b4d88a0b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:04 crc kubenswrapper[4727]: I1210 14:58:04.751978 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f601b3c-a4c8-4948-a149-784b4d88a0b4-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:05 crc kubenswrapper[4727]: I1210 14:58:05.092516 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 10 14:58:05 crc kubenswrapper[4727]: I1210 14:58:05.103749 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054","Type":"ContainerStarted","Data":"6413672ce4cf703e59c1bfdfb7f8ae40e0938118858f00cbb692f8df5594c846"} Dec 10 14:58:05 crc kubenswrapper[4727]: I1210 14:58:05.103898 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 10 14:58:05 crc kubenswrapper[4727]: I1210 14:58:05.112498 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778db78b58-dzbj2" event={"ID":"0f601b3c-a4c8-4948-a149-784b4d88a0b4","Type":"ContainerDied","Data":"cdb0cebdd95e7059e43ef6da7531666d9c08f41efc7b7f83ae31b78f00253f39"} Dec 10 14:58:05 crc kubenswrapper[4727]: I1210 14:58:05.112554 4727 scope.go:117] "RemoveContainer" containerID="9a5b92a092e819457258571ab100a96e2c2517ff7ab2a193786ca67a8faa5732" Dec 10 14:58:05 crc kubenswrapper[4727]: I1210 14:58:05.112665 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-778db78b58-dzbj2" Dec 10 14:58:05 crc kubenswrapper[4727]: I1210 14:58:05.123793 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c3d3fe1-767d-429b-adcd-72bd15ff6f65","Type":"ContainerStarted","Data":"9d334b6d5d8840f3dfbbda50d5a923ea73180232ef05d9b782dc164dc66aa80f"} Dec 10 14:58:05 crc kubenswrapper[4727]: I1210 14:58:05.157107 4727 scope.go:117] "RemoveContainer" containerID="f0c8141debefa58e1c6d3593250cb651937fad495bc54c7e0a96754370ba8bb2" Dec 10 14:58:05 crc kubenswrapper[4727]: I1210 14:58:05.165528 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.165503075 podStartE2EDuration="7.165503075s" podCreationTimestamp="2025-12-10 14:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:05.153389773 +0000 UTC m=+1589.348164325" watchObservedRunningTime="2025-12-10 14:58:05.165503075 +0000 UTC m=+1589.360277617" Dec 10 14:58:05 crc kubenswrapper[4727]: I1210 14:58:05.235992 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 14:58:05 crc kubenswrapper[4727]: I1210 14:58:05.245024 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-778db78b58-dzbj2"] Dec 10 14:58:05 crc kubenswrapper[4727]: I1210 14:58:05.256873 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-778db78b58-dzbj2"] Dec 10 14:58:06 crc kubenswrapper[4727]: I1210 14:58:06.138491 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b37eebb5-17ce-4765-a7d5-022abb215816" containerName="cinder-scheduler" containerID="cri-o://d6719c22672611bd03fb74756eb50fa06144a8269f3aa6ed950f98b154dfb899" gracePeriod=30 Dec 10 14:58:06 crc kubenswrapper[4727]: I1210 
14:58:06.139222 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b37eebb5-17ce-4765-a7d5-022abb215816" containerName="probe" containerID="cri-o://44a1791b60efa384fe5cce321b2ee182f04c2c8bd1af36b2bac01930e8fd0d9c" gracePeriod=30 Dec 10 14:58:06 crc kubenswrapper[4727]: I1210 14:58:06.585103 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f601b3c-a4c8-4948-a149-784b4d88a0b4" path="/var/lib/kubelet/pods/0f601b3c-a4c8-4948-a149-784b4d88a0b4/volumes" Dec 10 14:58:06 crc kubenswrapper[4727]: I1210 14:58:06.586153 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f5c8dd7-8982-4b05-9582-6fce29de5659" path="/var/lib/kubelet/pods/6f5c8dd7-8982-4b05-9582-6fce29de5659/volumes" Dec 10 14:58:07 crc kubenswrapper[4727]: I1210 14:58:07.172072 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c3d3fe1-767d-429b-adcd-72bd15ff6f65","Type":"ContainerStarted","Data":"6b78cea1f98c9453122264b36a0b51ee3f9ec92d00425150a69df34092154c8c"} Dec 10 14:58:07 crc kubenswrapper[4727]: I1210 14:58:07.172823 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 14:58:07 crc kubenswrapper[4727]: I1210 14:58:07.206407 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7196182 podStartE2EDuration="12.206389048s" podCreationTimestamp="2025-12-10 14:57:55 +0000 UTC" firstStartedPulling="2025-12-10 14:57:56.722705916 +0000 UTC m=+1580.917480458" lastFinishedPulling="2025-12-10 14:58:06.209476764 +0000 UTC m=+1590.404251306" observedRunningTime="2025-12-10 14:58:07.197753178 +0000 UTC m=+1591.392527720" watchObservedRunningTime="2025-12-10 14:58:07.206389048 +0000 UTC m=+1591.401163580" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.108692 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 10 14:58:08 crc kubenswrapper[4727]: E1210 14:58:08.109225 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f601b3c-a4c8-4948-a149-784b4d88a0b4" containerName="barbican-api" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.109248 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f601b3c-a4c8-4948-a149-784b4d88a0b4" containerName="barbican-api" Dec 10 14:58:08 crc kubenswrapper[4727]: E1210 14:58:08.109272 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5c8dd7-8982-4b05-9582-6fce29de5659" containerName="dnsmasq-dns" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.109280 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5c8dd7-8982-4b05-9582-6fce29de5659" containerName="dnsmasq-dns" Dec 10 14:58:08 crc kubenswrapper[4727]: E1210 14:58:08.109300 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f601b3c-a4c8-4948-a149-784b4d88a0b4" containerName="barbican-api-log" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.109308 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f601b3c-a4c8-4948-a149-784b4d88a0b4" containerName="barbican-api-log" Dec 10 14:58:08 crc kubenswrapper[4727]: E1210 14:58:08.109347 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5c8dd7-8982-4b05-9582-6fce29de5659" containerName="init" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.109355 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5c8dd7-8982-4b05-9582-6fce29de5659" 
containerName="init" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.109596 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f601b3c-a4c8-4948-a149-784b4d88a0b4" containerName="barbican-api" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.109615 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5c8dd7-8982-4b05-9582-6fce29de5659" containerName="dnsmasq-dns" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.109645 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f601b3c-a4c8-4948-a149-784b4d88a0b4" containerName="barbican-api-log" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.110423 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.113277 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.113526 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.114220 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-v2gxv" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.129859 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.180634 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j59mt\" (UniqueName: \"kubernetes.io/projected/d3fd12da-d7cc-49bc-b30b-346a7dd11f92-kube-api-access-j59mt\") pod \"openstackclient\" (UID: \"d3fd12da-d7cc-49bc-b30b-346a7dd11f92\") " pod="openstack/openstackclient" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.181136 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3fd12da-d7cc-49bc-b30b-346a7dd11f92-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d3fd12da-d7cc-49bc-b30b-346a7dd11f92\") " pod="openstack/openstackclient" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.181195 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d3fd12da-d7cc-49bc-b30b-346a7dd11f92-openstack-config-secret\") pod \"openstackclient\" (UID: \"d3fd12da-d7cc-49bc-b30b-346a7dd11f92\") " pod="openstack/openstackclient" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.181605 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d3fd12da-d7cc-49bc-b30b-346a7dd11f92-openstack-config\") pod \"openstackclient\" (UID: \"d3fd12da-d7cc-49bc-b30b-346a7dd11f92\") " pod="openstack/openstackclient" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.186840 4727 generic.go:334] "Generic (PLEG): container finished" podID="b37eebb5-17ce-4765-a7d5-022abb215816" containerID="44a1791b60efa384fe5cce321b2ee182f04c2c8bd1af36b2bac01930e8fd0d9c" exitCode=0 Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.186965 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"b37eebb5-17ce-4765-a7d5-022abb215816","Type":"ContainerDied","Data":"44a1791b60efa384fe5cce321b2ee182f04c2c8bd1af36b2bac01930e8fd0d9c"} Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.218004 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-586bdc5f9-njbjk" podUID="6f5c8dd7-8982-4b05-9582-6fce29de5659" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.170:5353: i/o timeout" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.282489 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j59mt\" (UniqueName: \"kubernetes.io/projected/d3fd12da-d7cc-49bc-b30b-346a7dd11f92-kube-api-access-j59mt\") pod \"openstackclient\" (UID: \"d3fd12da-d7cc-49bc-b30b-346a7dd11f92\") " pod="openstack/openstackclient" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.282591 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3fd12da-d7cc-49bc-b30b-346a7dd11f92-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d3fd12da-d7cc-49bc-b30b-346a7dd11f92\") " pod="openstack/openstackclient" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.282621 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d3fd12da-d7cc-49bc-b30b-346a7dd11f92-openstack-config-secret\") pod \"openstackclient\" (UID: \"d3fd12da-d7cc-49bc-b30b-346a7dd11f92\") " pod="openstack/openstackclient" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.282717 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d3fd12da-d7cc-49bc-b30b-346a7dd11f92-openstack-config\") pod \"openstackclient\" (UID: \"d3fd12da-d7cc-49bc-b30b-346a7dd11f92\") " pod="openstack/openstackclient" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.283830 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d3fd12da-d7cc-49bc-b30b-346a7dd11f92-openstack-config\") pod \"openstackclient\" (UID: \"d3fd12da-d7cc-49bc-b30b-346a7dd11f92\") " pod="openstack/openstackclient" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.291500 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d3fd12da-d7cc-49bc-b30b-346a7dd11f92-openstack-config-secret\") pod \"openstackclient\" (UID: \"d3fd12da-d7cc-49bc-b30b-346a7dd11f92\") " pod="openstack/openstackclient" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.291728 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3fd12da-d7cc-49bc-b30b-346a7dd11f92-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d3fd12da-d7cc-49bc-b30b-346a7dd11f92\") " pod="openstack/openstackclient" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.307320 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j59mt\" (UniqueName: \"kubernetes.io/projected/d3fd12da-d7cc-49bc-b30b-346a7dd11f92-kube-api-access-j59mt\") pod \"openstackclient\" (UID: \"d3fd12da-d7cc-49bc-b30b-346a7dd11f92\") " pod="openstack/openstackclient" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.436761 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 10 14:58:08 crc kubenswrapper[4727]: I1210 14:58:08.983463 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.213215 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d3fd12da-d7cc-49bc-b30b-346a7dd11f92","Type":"ContainerStarted","Data":"77b0203b05fe819de8a326fb3138f41b7f6d560bd1c0b4f4361ede0ab80b5c18"} Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.215935 4727 generic.go:334] "Generic (PLEG): container finished" podID="b37eebb5-17ce-4765-a7d5-022abb215816" containerID="d6719c22672611bd03fb74756eb50fa06144a8269f3aa6ed950f98b154dfb899" exitCode=0 Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.216004 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b37eebb5-17ce-4765-a7d5-022abb215816","Type":"ContainerDied","Data":"d6719c22672611bd03fb74756eb50fa06144a8269f3aa6ed950f98b154dfb899"} Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.581847 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.627391 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b37eebb5-17ce-4765-a7d5-022abb215816-etc-machine-id\") pod \"b37eebb5-17ce-4765-a7d5-022abb215816\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.627514 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb27p\" (UniqueName: \"kubernetes.io/projected/b37eebb5-17ce-4765-a7d5-022abb215816-kube-api-access-wb27p\") pod \"b37eebb5-17ce-4765-a7d5-022abb215816\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.627652 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b37eebb5-17ce-4765-a7d5-022abb215816-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b37eebb5-17ce-4765-a7d5-022abb215816" (UID: "b37eebb5-17ce-4765-a7d5-022abb215816"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.627763 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-config-data\") pod \"b37eebb5-17ce-4765-a7d5-022abb215816\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.627813 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-combined-ca-bundle\") pod \"b37eebb5-17ce-4765-a7d5-022abb215816\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.627923 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-scripts\") pod \"b37eebb5-17ce-4765-a7d5-022abb215816\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.628026 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-config-data-custom\") pod \"b37eebb5-17ce-4765-a7d5-022abb215816\" (UID: \"b37eebb5-17ce-4765-a7d5-022abb215816\") " Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.628937 4727 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b37eebb5-17ce-4765-a7d5-022abb215816-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.636211 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37eebb5-17ce-4765-a7d5-022abb215816-kube-api-access-wb27p" (OuterVolumeSpecName: "kube-api-access-wb27p") pod "b37eebb5-17ce-4765-a7d5-022abb215816" (UID: "b37eebb5-17ce-4765-a7d5-022abb215816"). InnerVolumeSpecName "kube-api-access-wb27p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.640103 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-scripts" (OuterVolumeSpecName: "scripts") pod "b37eebb5-17ce-4765-a7d5-022abb215816" (UID: "b37eebb5-17ce-4765-a7d5-022abb215816"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.649537 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b37eebb5-17ce-4765-a7d5-022abb215816" (UID: "b37eebb5-17ce-4765-a7d5-022abb215816"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.718103 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b37eebb5-17ce-4765-a7d5-022abb215816" (UID: "b37eebb5-17ce-4765-a7d5-022abb215816"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.731128 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.731181 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb27p\" (UniqueName: \"kubernetes.io/projected/b37eebb5-17ce-4765-a7d5-022abb215816-kube-api-access-wb27p\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.731198 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.731210 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.871592 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-config-data" (OuterVolumeSpecName: "config-data") pod "b37eebb5-17ce-4765-a7d5-022abb215816" (UID: "b37eebb5-17ce-4765-a7d5-022abb215816"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:09 crc kubenswrapper[4727]: I1210 14:58:09.935049 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37eebb5-17ce-4765-a7d5-022abb215816-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.229743 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b37eebb5-17ce-4765-a7d5-022abb215816","Type":"ContainerDied","Data":"f7f35d46d358f40384329d3c3ba534736bb4c7d5a02cf4f8596fe54badc8c0d0"} Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.229781 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.230146 4727 scope.go:117] "RemoveContainer" containerID="44a1791b60efa384fe5cce321b2ee182f04c2c8bd1af36b2bac01930e8fd0d9c" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.233664 4727 generic.go:334] "Generic (PLEG): container finished" podID="94e96924-cc06-48cd-a531-b0d5714f0d1c" containerID="e185c1c424a3fde4684e68da4b83ef178cbf765cbd64567d8c1ebc2bf36a6f7b" exitCode=0 Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.233727 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-twwbm" event={"ID":"94e96924-cc06-48cd-a531-b0d5714f0d1c","Type":"ContainerDied","Data":"e185c1c424a3fde4684e68da4b83ef178cbf765cbd64567d8c1ebc2bf36a6f7b"} Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.262606 4727 scope.go:117] "RemoveContainer" containerID="d6719c22672611bd03fb74756eb50fa06144a8269f3aa6ed950f98b154dfb899" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.312199 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.330659 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.366437 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 14:58:10 crc kubenswrapper[4727]: E1210 14:58:10.366990 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37eebb5-17ce-4765-a7d5-022abb215816" containerName="probe" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.367006 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37eebb5-17ce-4765-a7d5-022abb215816" containerName="probe" Dec 10 14:58:10 crc kubenswrapper[4727]: E1210 14:58:10.367021 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37eebb5-17ce-4765-a7d5-022abb215816" containerName="cinder-scheduler" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.367027 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37eebb5-17ce-4765-a7d5-022abb215816" containerName="cinder-scheduler" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.367247 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37eebb5-17ce-4765-a7d5-022abb215816" containerName="probe" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.367273 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37eebb5-17ce-4765-a7d5-022abb215816" containerName="cinder-scheduler" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.368502 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.384386 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.386340 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.452464 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkl5q\" (UniqueName: \"kubernetes.io/projected/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-kube-api-access-vkl5q\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.452545 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.452574 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.452637 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-config-data\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.452675 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-scripts\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.452719 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.554504 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkl5q\" (UniqueName: \"kubernetes.io/projected/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-kube-api-access-vkl5q\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.554592 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.554628 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.554720 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-config-data\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.554774 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-scripts\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.554709 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.554831 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.559942 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.560780 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-config-data\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.561575 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.570651 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-scripts\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.581660 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b37eebb5-17ce-4765-a7d5-022abb215816" path="/var/lib/kubelet/pods/b37eebb5-17ce-4765-a7d5-022abb215816/volumes" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.584563 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vkl5q\" (UniqueName: \"kubernetes.io/projected/51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a-kube-api-access-vkl5q\") pod \"cinder-scheduler-0\" (UID: \"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a\") " pod="openstack/cinder-scheduler-0" Dec 10 14:58:10 crc kubenswrapper[4727]: I1210 14:58:10.710002 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 14:58:11 crc kubenswrapper[4727]: I1210 14:58:11.268791 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 14:58:11 crc kubenswrapper[4727]: I1210 14:58:11.759672 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 10 14:58:11 crc kubenswrapper[4727]: I1210 14:58:11.829178 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:11 crc kubenswrapper[4727]: I1210 14:58:11.994163 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrwf4\" (UniqueName: \"kubernetes.io/projected/94e96924-cc06-48cd-a531-b0d5714f0d1c-kube-api-access-jrwf4\") pod \"94e96924-cc06-48cd-a531-b0d5714f0d1c\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " Dec 10 14:58:11 crc kubenswrapper[4727]: I1210 14:58:11.994359 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/94e96924-cc06-48cd-a531-b0d5714f0d1c-certs\") pod \"94e96924-cc06-48cd-a531-b0d5714f0d1c\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " Dec 10 14:58:11 crc kubenswrapper[4727]: I1210 14:58:11.994425 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-scripts\") pod \"94e96924-cc06-48cd-a531-b0d5714f0d1c\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " Dec 10 14:58:11 crc kubenswrapper[4727]: I1210 14:58:11.994517 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-combined-ca-bundle\") pod \"94e96924-cc06-48cd-a531-b0d5714f0d1c\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " Dec 10 14:58:11 crc kubenswrapper[4727]: I1210 14:58:11.994768 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-config-data\") pod \"94e96924-cc06-48cd-a531-b0d5714f0d1c\" (UID: \"94e96924-cc06-48cd-a531-b0d5714f0d1c\") " Dec 10 14:58:11 crc kubenswrapper[4727]: I1210 14:58:11.999386 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-scripts" (OuterVolumeSpecName: "scripts") pod "94e96924-cc06-48cd-a531-b0d5714f0d1c" (UID: "94e96924-cc06-48cd-a531-b0d5714f0d1c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.003063 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e96924-cc06-48cd-a531-b0d5714f0d1c-certs" (OuterVolumeSpecName: "certs") pod "94e96924-cc06-48cd-a531-b0d5714f0d1c" (UID: "94e96924-cc06-48cd-a531-b0d5714f0d1c"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.009487 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e96924-cc06-48cd-a531-b0d5714f0d1c-kube-api-access-jrwf4" (OuterVolumeSpecName: "kube-api-access-jrwf4") pod "94e96924-cc06-48cd-a531-b0d5714f0d1c" (UID: "94e96924-cc06-48cd-a531-b0d5714f0d1c"). InnerVolumeSpecName "kube-api-access-jrwf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.047007 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-config-data" (OuterVolumeSpecName: "config-data") pod "94e96924-cc06-48cd-a531-b0d5714f0d1c" (UID: "94e96924-cc06-48cd-a531-b0d5714f0d1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.057573 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94e96924-cc06-48cd-a531-b0d5714f0d1c" (UID: "94e96924-cc06-48cd-a531-b0d5714f0d1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.099069 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.099431 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrwf4\" (UniqueName: \"kubernetes.io/projected/94e96924-cc06-48cd-a531-b0d5714f0d1c-kube-api-access-jrwf4\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.099455 4727 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/94e96924-cc06-48cd-a531-b0d5714f0d1c-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.099467 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.099481 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94e96924-cc06-48cd-a531-b0d5714f0d1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.262362 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a","Type":"ContainerStarted","Data":"e8675beb3c35d173e2c4861ee7109e4ba6388291b7622ebdb0075380db6c089d"} Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.271422 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-twwbm" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.271499 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-twwbm" event={"ID":"94e96924-cc06-48cd-a531-b0d5714f0d1c","Type":"ContainerDied","Data":"c617643a51cf83c21769b80fa94237f96e2966a09fc54db1d4268ccebf9c6669"} Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.271534 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c617643a51cf83c21769b80fa94237f96e2966a09fc54db1d4268ccebf9c6669" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.624896 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 14:58:12 crc kubenswrapper[4727]: E1210 14:58:12.625632 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e96924-cc06-48cd-a531-b0d5714f0d1c" containerName="cloudkitty-storageinit" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.625656 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e96924-cc06-48cd-a531-b0d5714f0d1c" containerName="cloudkitty-storageinit" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.627562 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="94e96924-cc06-48cd-a531-b0d5714f0d1c" containerName="cloudkitty-storageinit" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.628590 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.633653 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.633859 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.634076 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.634078 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.634283 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-jlplt" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.668036 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.697065 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-l2cjd"] Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.699352 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.712826 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.712873 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5n4m\" (UniqueName: \"kubernetes.io/projected/b3ea5c73-8dea-4908-b688-94b7639054da-kube-api-access-s5n4m\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.713111 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-scripts\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.717215 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.717312 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8vf4\" (UniqueName: \"kubernetes.io/projected/2d94959d-f645-4e03-b152-bbd85cf1a91c-kube-api-access-c8vf4\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.717374 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-config-data\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.717415 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.717589 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-config\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.717738 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: 
\"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.717850 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-dns-svc\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.717929 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.717999 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b3ea5c73-8dea-4908-b688-94b7639054da-certs\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.787106 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-l2cjd"] Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.820167 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.820263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8vf4\" (UniqueName: \"kubernetes.io/projected/2d94959d-f645-4e03-b152-bbd85cf1a91c-kube-api-access-c8vf4\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.820313 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-config-data\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.820346 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.820390 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-config\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.820469 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.820539 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-dns-svc\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.820580 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.820621 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b3ea5c73-8dea-4908-b688-94b7639054da-certs\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.820649 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.820672 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5n4m\" (UniqueName: \"kubernetes.io/projected/b3ea5c73-8dea-4908-b688-94b7639054da-kube-api-access-s5n4m\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.820699 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-scripts\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.822874 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-dns-svc\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.823430 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.823940 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-config\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" 
Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.824545 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.831718 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.832277 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.832612 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-config-data\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.833642 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.844232 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b3ea5c73-8dea-4908-b688-94b7639054da-certs\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.852083 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-scripts\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.854203 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5n4m\" (UniqueName: \"kubernetes.io/projected/b3ea5c73-8dea-4908-b688-94b7639054da-kube-api-access-s5n4m\") pod \"cloudkitty-proc-0\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.858590 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8vf4\" (UniqueName: \"kubernetes.io/projected/2d94959d-f645-4e03-b152-bbd85cf1a91c-kube-api-access-c8vf4\") pod \"dnsmasq-dns-67bdc55879-l2cjd\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.894053 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.896232 4727 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.900940 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.929201 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e970c8b-0eb9-4607-9d74-559fdb8bd753-logs\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.929415 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-scripts\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.929444 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/7e970c8b-0eb9-4607-9d74-559fdb8bd753-certs\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.932003 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.933450 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.933501 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-config-data\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.933627 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vxcx\" (UniqueName: \"kubernetes.io/projected/7e970c8b-0eb9-4607-9d74-559fdb8bd753-kube-api-access-7vxcx\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.951843 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 14:58:12 crc kubenswrapper[4727]: I1210 14:58:12.984460 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.073580 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.073671 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-config-data\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.073797 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vxcx\" (UniqueName: \"kubernetes.io/projected/7e970c8b-0eb9-4607-9d74-559fdb8bd753-kube-api-access-7vxcx\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.073991 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e970c8b-0eb9-4607-9d74-559fdb8bd753-logs\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.074080 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-scripts\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.074106 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/7e970c8b-0eb9-4607-9d74-559fdb8bd753-certs\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.074143 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.078635 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e970c8b-0eb9-4607-9d74-559fdb8bd753-logs\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.086050 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.086535 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.087223 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-scripts\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.107425 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/7e970c8b-0eb9-4607-9d74-559fdb8bd753-certs\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.111194 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-config-data\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.115540 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.126107 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vxcx\" (UniqueName: \"kubernetes.io/projected/7e970c8b-0eb9-4607-9d74-559fdb8bd753-kube-api-access-7vxcx\") pod \"cloudkitty-api-0\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.288948 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a","Type":"ContainerStarted","Data":"83c55fabb0be45043b62b255cd8542d2bb5a3ae7db561f548583e281481c79e7"} Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.307692 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.753484 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 14:58:13 crc kubenswrapper[4727]: I1210 14:58:13.971656 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-l2cjd"] Dec 10 14:58:14 crc kubenswrapper[4727]: I1210 14:58:14.057577 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 14:58:14 crc kubenswrapper[4727]: I1210 14:58:14.313092 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"b3ea5c73-8dea-4908-b688-94b7639054da","Type":"ContainerStarted","Data":"6a77a0b6330be636ce047eea90bea9a498291ccb2a00de03dda8a35af65c56df"} Dec 10 14:58:14 crc kubenswrapper[4727]: I1210 14:58:14.321971 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"7e970c8b-0eb9-4607-9d74-559fdb8bd753","Type":"ContainerStarted","Data":"4876c929aadec88b28499372ae8cdd42fafbcd813a8fdbf80b5fe1855ae0a1eb"} Dec 10 14:58:14 crc kubenswrapper[4727]: I1210 14:58:14.332257 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" event={"ID":"2d94959d-f645-4e03-b152-bbd85cf1a91c","Type":"ContainerStarted","Data":"e26dbc0f973fc3b737baea4926ef722e9b02bc7e9842225adfebb97b8f2fead4"} Dec 10 14:58:16 crc kubenswrapper[4727]: I1210 14:58:16.004948 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 14:58:16 crc kubenswrapper[4727]: I1210 14:58:16.400893 4727 generic.go:334] "Generic (PLEG): container finished" podID="2d94959d-f645-4e03-b152-bbd85cf1a91c" containerID="ad6ccaf8a6fcc37aa6ee60e7f21f0188e8b29e2ef9a42590c62227008fb66675" exitCode=0 Dec 10 14:58:16 crc kubenswrapper[4727]: I1210 14:58:16.401256 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" event={"ID":"2d94959d-f645-4e03-b152-bbd85cf1a91c","Type":"ContainerDied","Data":"ad6ccaf8a6fcc37aa6ee60e7f21f0188e8b29e2ef9a42590c62227008fb66675"} Dec 10 14:58:16 crc kubenswrapper[4727]: I1210 14:58:16.416808 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"7e970c8b-0eb9-4607-9d74-559fdb8bd753","Type":"ContainerStarted","Data":"c5d45e81f124a43c54e15b003fcfbdbbc2cb9df351b9d1eff3575afda63341f5"} Dec 10 14:58:16 crc kubenswrapper[4727]: I1210 14:58:16.421259 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a","Type":"ContainerStarted","Data":"bdd681785a0867157f010c61dc64cc3b6d7b9f4094c04e6f5f04df85c8476bea"} Dec 10 14:58:16 crc kubenswrapper[4727]: I1210 14:58:16.485945 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.485897705 podStartE2EDuration="6.485897705s" podCreationTimestamp="2025-12-10 14:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:16.460752135 +0000 UTC m=+1600.655526677" watchObservedRunningTime="2025-12-10 14:58:16.485897705 +0000 UTC m=+1600.680672247" Dec 10 14:58:17 crc kubenswrapper[4727]: I1210 14:58:17.439598 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" 
event={"ID":"7e970c8b-0eb9-4607-9d74-559fdb8bd753","Type":"ContainerStarted","Data":"ed471da68915250d3a2c6b07b13ed84c8d0e5b2d20d304af326cc7a4246febcf"} Dec 10 14:58:17 crc kubenswrapper[4727]: I1210 14:58:17.439994 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Dec 10 14:58:17 crc kubenswrapper[4727]: I1210 14:58:17.439819 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="7e970c8b-0eb9-4607-9d74-559fdb8bd753" containerName="cloudkitty-api" containerID="cri-o://ed471da68915250d3a2c6b07b13ed84c8d0e5b2d20d304af326cc7a4246febcf" gracePeriod=30 Dec 10 14:58:17 crc kubenswrapper[4727]: I1210 14:58:17.439774 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="7e970c8b-0eb9-4607-9d74-559fdb8bd753" containerName="cloudkitty-api-log" containerID="cri-o://c5d45e81f124a43c54e15b003fcfbdbbc2cb9df351b9d1eff3575afda63341f5" gracePeriod=30 Dec 10 14:58:17 crc kubenswrapper[4727]: I1210 14:58:17.444160 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" event={"ID":"2d94959d-f645-4e03-b152-bbd85cf1a91c","Type":"ContainerStarted","Data":"05f92f9e106fdc772624f69a999f044d049a14fd8c1748b0967af6baf585831a"} Dec 10 14:58:17 crc kubenswrapper[4727]: I1210 14:58:17.444997 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:17 crc kubenswrapper[4727]: I1210 14:58:17.472978 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=5.472952093 podStartE2EDuration="5.472952093s" podCreationTimestamp="2025-12-10 14:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:17.469986137 +0000 UTC m=+1601.664760699" watchObservedRunningTime="2025-12-10 14:58:17.472952093 +0000 UTC m=+1601.667726635" Dec 10 14:58:17 crc kubenswrapper[4727]: I1210 14:58:17.520750 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" podStartSLOduration=5.520722918 podStartE2EDuration="5.520722918s" podCreationTimestamp="2025-12-10 14:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:17.499228785 +0000 UTC m=+1601.694003337" watchObservedRunningTime="2025-12-10 14:58:17.520722918 +0000 UTC m=+1601.715497460" Dec 10 14:58:18 crc kubenswrapper[4727]: I1210 14:58:18.459116 4727 generic.go:334] "Generic (PLEG): container finished" podID="7e970c8b-0eb9-4607-9d74-559fdb8bd753" containerID="c5d45e81f124a43c54e15b003fcfbdbbc2cb9df351b9d1eff3575afda63341f5" exitCode=143 Dec 10 14:58:18 crc kubenswrapper[4727]: I1210 14:58:18.459253 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"7e970c8b-0eb9-4607-9d74-559fdb8bd753","Type":"ContainerDied","Data":"c5d45e81f124a43c54e15b003fcfbdbbc2cb9df351b9d1eff3575afda63341f5"} Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.033322 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7f4548974c-shwfg"] Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.036130 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.039645 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.040096 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.040092 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.049559 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7f4548974c-shwfg"] Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.218343 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dd2b5d5-3831-4578-a676-5338dd451099-log-httpd\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.218434 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dd2b5d5-3831-4578-a676-5338dd451099-run-httpd\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.218493 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2b5d5-3831-4578-a676-5338dd451099-public-tls-certs\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.218563 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd2b5d5-3831-4578-a676-5338dd451099-config-data\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.218774 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2b5d5-3831-4578-a676-5338dd451099-internal-tls-certs\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.218890 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd2b5d5-3831-4578-a676-5338dd451099-combined-ca-bundle\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.219071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj5vm\" (UniqueName: \"kubernetes.io/projected/4dd2b5d5-3831-4578-a676-5338dd451099-kube-api-access-pj5vm\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " 
pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.219188 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4dd2b5d5-3831-4578-a676-5338dd451099-etc-swift\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.321034 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2b5d5-3831-4578-a676-5338dd451099-internal-tls-certs\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.321115 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd2b5d5-3831-4578-a676-5338dd451099-combined-ca-bundle\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.321156 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj5vm\" (UniqueName: \"kubernetes.io/projected/4dd2b5d5-3831-4578-a676-5338dd451099-kube-api-access-pj5vm\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.321206 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4dd2b5d5-3831-4578-a676-5338dd451099-etc-swift\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.321293 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dd2b5d5-3831-4578-a676-5338dd451099-log-httpd\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.321341 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dd2b5d5-3831-4578-a676-5338dd451099-run-httpd\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.321398 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2b5d5-3831-4578-a676-5338dd451099-public-tls-certs\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.321443 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd2b5d5-3831-4578-a676-5338dd451099-config-data\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 
14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.322723 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dd2b5d5-3831-4578-a676-5338dd451099-run-httpd\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.323021 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dd2b5d5-3831-4578-a676-5338dd451099-log-httpd\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.328723 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2b5d5-3831-4578-a676-5338dd451099-public-tls-certs\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.329629 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4dd2b5d5-3831-4578-a676-5338dd451099-etc-swift\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.330965 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd2b5d5-3831-4578-a676-5338dd451099-config-data\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.331081 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd2b5d5-3831-4578-a676-5338dd451099-internal-tls-certs\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.339801 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd2b5d5-3831-4578-a676-5338dd451099-combined-ca-bundle\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.349869 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj5vm\" (UniqueName: \"kubernetes.io/projected/4dd2b5d5-3831-4578-a676-5338dd451099-kube-api-access-pj5vm\") pod \"swift-proxy-7f4548974c-shwfg\" (UID: \"4dd2b5d5-3831-4578-a676-5338dd451099\") " pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.369759 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.497553 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"b3ea5c73-8dea-4908-b688-94b7639054da","Type":"ContainerStarted","Data":"a76922ed2286eacbf268a2f9139a9ca65e780dcaf793878fceeb4d735a5258ee"} Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.522083 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.8415256380000002 podStartE2EDuration="7.522061934s" podCreationTimestamp="2025-12-10 14:58:12 +0000 UTC" firstStartedPulling="2025-12-10 14:58:13.760492515 +0000 UTC m=+1597.955267057" lastFinishedPulling="2025-12-10 14:58:18.441028811 +0000 UTC m=+1602.635803353" observedRunningTime="2025-12-10 14:58:19.515956977 +0000 UTC m=+1603.710731519" watchObservedRunningTime="2025-12-10 14:58:19.522061934 +0000 UTC m=+1603.716836476" Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.570882 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 14:58:19 crc kubenswrapper[4727]: I1210 14:58:19.983848 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-7f9b8bf876-ww2gz" podUID="0acfba51-9dd2-48cb-b22d-7a59dff45f74" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 14:58:20 crc kubenswrapper[4727]: I1210 14:58:20.001321 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7f9b8bf876-ww2gz" podUID="0acfba51-9dd2-48cb-b22d-7a59dff45f74" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 14:58:20 crc kubenswrapper[4727]: I1210 14:58:20.001864 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-7f9b8bf876-ww2gz" podUID="0acfba51-9dd2-48cb-b22d-7a59dff45f74" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 14:58:20 crc kubenswrapper[4727]: I1210 14:58:20.130997 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7f4548974c-shwfg"] Dec 10 14:58:20 crc kubenswrapper[4727]: I1210 14:58:20.514980 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7f4548974c-shwfg" event={"ID":"4dd2b5d5-3831-4578-a676-5338dd451099","Type":"ContainerStarted","Data":"bb0d4092903f5c7e8a9a46ee190fcfe3e4d0ee192ffdcd46074a3310080794e4"} Dec 10 14:58:20 crc kubenswrapper[4727]: I1210 14:58:20.711550 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 10 14:58:21 crc kubenswrapper[4727]: I1210 14:58:21.132170 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 10 14:58:21 crc kubenswrapper[4727]: I1210 14:58:21.543201 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7f4548974c-shwfg" event={"ID":"4dd2b5d5-3831-4578-a676-5338dd451099","Type":"ContainerStarted","Data":"11ca65a8039d37124cbf7efda4f841185ec3f9690dea8eb6b5adc23b60e802bb"} Dec 10 14:58:21 crc kubenswrapper[4727]: I1210 14:58:21.543376 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="b3ea5c73-8dea-4908-b688-94b7639054da" containerName="cloudkitty-proc" containerID="cri-o://a76922ed2286eacbf268a2f9139a9ca65e780dcaf793878fceeb4d735a5258ee" 
gracePeriod=30 Dec 10 14:58:21 crc kubenswrapper[4727]: I1210 14:58:21.599775 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:21 crc kubenswrapper[4727]: I1210 14:58:21.600116 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="ceilometer-central-agent" containerID="cri-o://f2b84b2705e5bf76ab581f55589f15eccc5771d8078211ae5ff2838b0665a04e" gracePeriod=30 Dec 10 14:58:21 crc kubenswrapper[4727]: I1210 14:58:21.600289 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="proxy-httpd" containerID="cri-o://6b78cea1f98c9453122264b36a0b51ee3f9ec92d00425150a69df34092154c8c" gracePeriod=30 Dec 10 14:58:21 crc kubenswrapper[4727]: I1210 14:58:21.600353 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="sg-core" containerID="cri-o://9d334b6d5d8840f3dfbbda50d5a923ea73180232ef05d9b782dc164dc66aa80f" gracePeriod=30 Dec 10 14:58:21 crc kubenswrapper[4727]: I1210 14:58:21.600397 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="ceilometer-notification-agent" containerID="cri-o://1b22da9a800bedeed241c7d8a4f2abf2e40a1b9db07f6532df08d01a37489cef" gracePeriod=30 Dec 10 14:58:21 crc kubenswrapper[4727]: I1210 14:58:21.619284 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.178:3000/\": EOF" Dec 10 14:58:22 crc kubenswrapper[4727]: I1210 14:58:22.584027 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:22 crc kubenswrapper[4727]: I1210 14:58:22.584286 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7f4548974c-shwfg" event={"ID":"4dd2b5d5-3831-4578-a676-5338dd451099","Type":"ContainerStarted","Data":"52d236bd4eac791eeea6e8f08c60a1651994712300f9407f75642bdca7eeb4ba"} Dec 10 14:58:22 crc kubenswrapper[4727]: I1210 14:58:22.584308 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:22 crc kubenswrapper[4727]: I1210 14:58:22.598723 4727 generic.go:334] "Generic (PLEG): container finished" podID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerID="6b78cea1f98c9453122264b36a0b51ee3f9ec92d00425150a69df34092154c8c" exitCode=0 Dec 10 14:58:22 crc kubenswrapper[4727]: I1210 14:58:22.598758 4727 generic.go:334] "Generic (PLEG): container finished" podID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerID="9d334b6d5d8840f3dfbbda50d5a923ea73180232ef05d9b782dc164dc66aa80f" exitCode=2 Dec 10 14:58:22 crc kubenswrapper[4727]: I1210 14:58:22.598784 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c3d3fe1-767d-429b-adcd-72bd15ff6f65","Type":"ContainerDied","Data":"6b78cea1f98c9453122264b36a0b51ee3f9ec92d00425150a69df34092154c8c"} Dec 10 14:58:22 crc kubenswrapper[4727]: I1210 14:58:22.598812 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9c3d3fe1-767d-429b-adcd-72bd15ff6f65","Type":"ContainerDied","Data":"9d334b6d5d8840f3dfbbda50d5a923ea73180232ef05d9b782dc164dc66aa80f"} Dec 10 14:58:22 crc kubenswrapper[4727]: I1210 14:58:22.614978 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7f4548974c-shwfg" podStartSLOduration=4.614949578 podStartE2EDuration="4.614949578s" podCreationTimestamp="2025-12-10 14:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:22.596600246 +0000 UTC m=+1606.791374788" watchObservedRunningTime="2025-12-10 14:58:22.614949578 +0000 UTC m=+1606.809724120" Dec 10 14:58:23 crc kubenswrapper[4727]: I1210 14:58:23.089349 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 14:58:23 crc kubenswrapper[4727]: I1210 14:58:23.184516 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lr6qp"] Dec 10 14:58:23 crc kubenswrapper[4727]: I1210 14:58:23.184840 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" podUID="5fcd0f0a-6bef-42d5-a6be-7de638221c2d" containerName="dnsmasq-dns" containerID="cri-o://abcba7042fd38da9aa60a498a6143a2b721ea6adfdc6cebc70454788e93f9427" gracePeriod=10 Dec 10 14:58:23 crc kubenswrapper[4727]: I1210 14:58:23.620494 4727 generic.go:334] "Generic (PLEG): container finished" podID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerID="f2b84b2705e5bf76ab581f55589f15eccc5771d8078211ae5ff2838b0665a04e" exitCode=0 Dec 10 14:58:23 crc kubenswrapper[4727]: I1210 14:58:23.620570 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c3d3fe1-767d-429b-adcd-72bd15ff6f65","Type":"ContainerDied","Data":"f2b84b2705e5bf76ab581f55589f15eccc5771d8078211ae5ff2838b0665a04e"} Dec 10 14:58:23 crc kubenswrapper[4727]: I1210 14:58:23.631148 4727 generic.go:334] "Generic (PLEG): container finished" podID="5fcd0f0a-6bef-42d5-a6be-7de638221c2d" containerID="abcba7042fd38da9aa60a498a6143a2b721ea6adfdc6cebc70454788e93f9427" exitCode=0 Dec 10 14:58:23 crc kubenswrapper[4727]: I1210 14:58:23.631248 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" event={"ID":"5fcd0f0a-6bef-42d5-a6be-7de638221c2d","Type":"ContainerDied","Data":"abcba7042fd38da9aa60a498a6143a2b721ea6adfdc6cebc70454788e93f9427"} Dec 10 14:58:25 crc kubenswrapper[4727]: I1210 14:58:25.063456 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" podUID="5fcd0f0a-6bef-42d5-a6be-7de638221c2d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: connect: connection refused" Dec 10 14:58:25 crc kubenswrapper[4727]: I1210 14:58:25.690858 4727 generic.go:334] "Generic (PLEG): container finished" podID="b3ea5c73-8dea-4908-b688-94b7639054da" containerID="a76922ed2286eacbf268a2f9139a9ca65e780dcaf793878fceeb4d735a5258ee" exitCode=0 Dec 10 14:58:25 crc kubenswrapper[4727]: I1210 14:58:25.691007 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"b3ea5c73-8dea-4908-b688-94b7639054da","Type":"ContainerDied","Data":"a76922ed2286eacbf268a2f9139a9ca65e780dcaf793878fceeb4d735a5258ee"} Dec 10 14:58:25 crc kubenswrapper[4727]: I1210 14:58:25.695456 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerID="1b22da9a800bedeed241c7d8a4f2abf2e40a1b9db07f6532df08d01a37489cef" exitCode=0 Dec 10 14:58:25 crc kubenswrapper[4727]: I1210 14:58:25.695537 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c3d3fe1-767d-429b-adcd-72bd15ff6f65","Type":"ContainerDied","Data":"1b22da9a800bedeed241c7d8a4f2abf2e40a1b9db07f6532df08d01a37489cef"} Dec 10 14:58:25 crc kubenswrapper[4727]: I1210 14:58:25.949868 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.178:3000/\": dial tcp 10.217.0.178:3000: connect: connection refused" Dec 10 14:58:26 crc kubenswrapper[4727]: I1210 14:58:26.261389 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-78f5bcc65c-vgwfj" Dec 10 14:58:26 crc kubenswrapper[4727]: I1210 14:58:26.337538 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7f9b8bf876-ww2gz"] Dec 10 14:58:26 crc kubenswrapper[4727]: I1210 14:58:26.338143 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7f9b8bf876-ww2gz" podUID="0acfba51-9dd2-48cb-b22d-7a59dff45f74" containerName="neutron-api" containerID="cri-o://74aec552eb303bb4a2cc48d7317680363b863600019a3b48bb563ae33d64efc7" gracePeriod=30 Dec 10 14:58:26 crc kubenswrapper[4727]: I1210 14:58:26.339242 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7f9b8bf876-ww2gz" podUID="0acfba51-9dd2-48cb-b22d-7a59dff45f74" containerName="neutron-httpd" containerID="cri-o://47fb4b5341867b21a4284760d2829c377b23bf60434a949563fe300973a2a490" gracePeriod=30 Dec 10 14:58:26 crc kubenswrapper[4727]: I1210 14:58:26.361951 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7f9b8bf876-ww2gz" podUID="0acfba51-9dd2-48cb-b22d-7a59dff45f74" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.175:9696/\": EOF" Dec 10 14:58:26 crc kubenswrapper[4727]: I1210 14:58:26.442781 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:58:26 crc kubenswrapper[4727]: I1210 14:58:26.443087 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1c526c7c-ad39-4172-a998-a6935f2522e2" containerName="glance-log" containerID="cri-o://441d56117116acd6aaff1cf86c763a2812fe8795fbe5b9ceebbe9962cc222af4" gracePeriod=30 Dec 10 14:58:26 crc kubenswrapper[4727]: I1210 14:58:26.443649 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1c526c7c-ad39-4172-a998-a6935f2522e2" containerName="glance-httpd" containerID="cri-o://b12e9660bf8585f219d40f7f23ec3334f7caba717c11c222a07e84a444bfeeff" gracePeriod=30 Dec 10 14:58:26 crc kubenswrapper[4727]: I1210 14:58:26.712873 4727 generic.go:334] "Generic (PLEG): container finished" podID="1c526c7c-ad39-4172-a998-a6935f2522e2" containerID="441d56117116acd6aaff1cf86c763a2812fe8795fbe5b9ceebbe9962cc222af4" exitCode=143 Dec 10 14:58:26 crc kubenswrapper[4727]: I1210 14:58:26.712981 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"1c526c7c-ad39-4172-a998-a6935f2522e2","Type":"ContainerDied","Data":"441d56117116acd6aaff1cf86c763a2812fe8795fbe5b9ceebbe9962cc222af4"} Dec 10 14:58:28 crc kubenswrapper[4727]: I1210 14:58:28.046336 4727 generic.go:334] "Generic (PLEG): container finished" podID="0acfba51-9dd2-48cb-b22d-7a59dff45f74" containerID="47fb4b5341867b21a4284760d2829c377b23bf60434a949563fe300973a2a490" exitCode=0 Dec 10 14:58:28 crc kubenswrapper[4727]: I1210 14:58:28.046492 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f9b8bf876-ww2gz" event={"ID":"0acfba51-9dd2-48cb-b22d-7a59dff45f74","Type":"ContainerDied","Data":"47fb4b5341867b21a4284760d2829c377b23bf60434a949563fe300973a2a490"} Dec 10 14:58:29 crc kubenswrapper[4727]: I1210 14:58:29.378410 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:29 crc kubenswrapper[4727]: I1210 14:58:29.383737 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7f4548974c-shwfg" Dec 10 14:58:30 crc kubenswrapper[4727]: I1210 14:58:30.062832 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" podUID="5fcd0f0a-6bef-42d5-a6be-7de638221c2d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: connect: connection refused" Dec 10 14:58:30 crc kubenswrapper[4727]: I1210 14:58:30.612316 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:58:30 crc kubenswrapper[4727]: I1210 14:58:30.612834 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7eab25e3-c058-4db1-b610-e5394ae0c2c1" containerName="glance-log" containerID="cri-o://ccc8fdbcd654ff94c8f4ffa12bb73c87d91523ea1a02dd5bbc0a7ba557e2635a" gracePeriod=30 Dec 10 14:58:30 crc kubenswrapper[4727]: I1210 14:58:30.613396 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7eab25e3-c058-4db1-b610-e5394ae0c2c1" containerName="glance-httpd" containerID="cri-o://3d1787371df6c76ae727a45e8e2d7bb6492c4f12f5af80f95f0b2b49574447b6" gracePeriod=30 Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.090325 4727 generic.go:334] "Generic (PLEG): container finished" podID="0acfba51-9dd2-48cb-b22d-7a59dff45f74" containerID="74aec552eb303bb4a2cc48d7317680363b863600019a3b48bb563ae33d64efc7" exitCode=0 Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.090405 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f9b8bf876-ww2gz" event={"ID":"0acfba51-9dd2-48cb-b22d-7a59dff45f74","Type":"ContainerDied","Data":"74aec552eb303bb4a2cc48d7317680363b863600019a3b48bb563ae33d64efc7"} Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.093290 4727 generic.go:334] "Generic (PLEG): container finished" podID="1c526c7c-ad39-4172-a998-a6935f2522e2" containerID="b12e9660bf8585f219d40f7f23ec3334f7caba717c11c222a07e84a444bfeeff" exitCode=0 Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.093352 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c526c7c-ad39-4172-a998-a6935f2522e2","Type":"ContainerDied","Data":"b12e9660bf8585f219d40f7f23ec3334f7caba717c11c222a07e84a444bfeeff"} Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.095757 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="7eab25e3-c058-4db1-b610-e5394ae0c2c1" containerID="ccc8fdbcd654ff94c8f4ffa12bb73c87d91523ea1a02dd5bbc0a7ba557e2635a" exitCode=143 Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.095793 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7eab25e3-c058-4db1-b610-e5394ae0c2c1","Type":"ContainerDied","Data":"ccc8fdbcd654ff94c8f4ffa12bb73c87d91523ea1a02dd5bbc0a7ba557e2635a"} Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.128115 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="1c526c7c-ad39-4172-a998-a6935f2522e2" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.167:9292/healthcheck\": dial tcp 10.217.0.167:9292: connect: connection refused" Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.128493 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="1c526c7c-ad39-4172-a998-a6935f2522e2" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.167:9292/healthcheck\": dial tcp 10.217.0.167:9292: connect: connection refused" Dec 10 14:58:31 crc kubenswrapper[4727]: E1210 14:58:31.187335 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Dec 10 14:58:31 crc kubenswrapper[4727]: E1210 14:58:31.188413 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n84h9ch9h64bh684h55fh5b9h568h58bh5b9h655h55dh5ffh5c6h5fch566h65bh5f9h657h5c8h575h544h5bch558h594h66fh5b8h588h5d8h57hd7h554q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j59mt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(d3fd12da-d7cc-49bc-b30b-346a7dd11f92): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:58:31 crc kubenswrapper[4727]: E1210 14:58:31.189882 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="d3fd12da-d7cc-49bc-b30b-346a7dd11f92" Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.762706 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.901767 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-combined-ca-bundle\") pod \"b3ea5c73-8dea-4908-b688-94b7639054da\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.901893 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-config-data\") pod \"b3ea5c73-8dea-4908-b688-94b7639054da\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.901985 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-config-data-custom\") pod \"b3ea5c73-8dea-4908-b688-94b7639054da\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.902019 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b3ea5c73-8dea-4908-b688-94b7639054da-certs\") pod \"b3ea5c73-8dea-4908-b688-94b7639054da\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.902103 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-scripts\") pod \"b3ea5c73-8dea-4908-b688-94b7639054da\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.902124 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5n4m\" (UniqueName: \"kubernetes.io/projected/b3ea5c73-8dea-4908-b688-94b7639054da-kube-api-access-s5n4m\") pod \"b3ea5c73-8dea-4908-b688-94b7639054da\" (UID: \"b3ea5c73-8dea-4908-b688-94b7639054da\") " Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.913209 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ea5c73-8dea-4908-b688-94b7639054da-kube-api-access-s5n4m" (OuterVolumeSpecName: "kube-api-access-s5n4m") pod 
"b3ea5c73-8dea-4908-b688-94b7639054da" (UID: "b3ea5c73-8dea-4908-b688-94b7639054da"). InnerVolumeSpecName "kube-api-access-s5n4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.914772 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b3ea5c73-8dea-4908-b688-94b7639054da" (UID: "b3ea5c73-8dea-4908-b688-94b7639054da"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.923842 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-scripts" (OuterVolumeSpecName: "scripts") pod "b3ea5c73-8dea-4908-b688-94b7639054da" (UID: "b3ea5c73-8dea-4908-b688-94b7639054da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.932948 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ea5c73-8dea-4908-b688-94b7639054da-certs" (OuterVolumeSpecName: "certs") pod "b3ea5c73-8dea-4908-b688-94b7639054da" (UID: "b3ea5c73-8dea-4908-b688-94b7639054da"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.964080 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-config-data" (OuterVolumeSpecName: "config-data") pod "b3ea5c73-8dea-4908-b688-94b7639054da" (UID: "b3ea5c73-8dea-4908-b688-94b7639054da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:31 crc kubenswrapper[4727]: I1210 14:58:31.971858 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3ea5c73-8dea-4908-b688-94b7639054da" (UID: "b3ea5c73-8dea-4908-b688-94b7639054da"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.004579 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.004607 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.004616 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.004625 4727 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b3ea5c73-8dea-4908-b688-94b7639054da-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.004634 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ea5c73-8dea-4908-b688-94b7639054da-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.004642 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5n4m\" (UniqueName: \"kubernetes.io/projected/b3ea5c73-8dea-4908-b688-94b7639054da-kube-api-access-s5n4m\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.107881 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c526c7c-ad39-4172-a998-a6935f2522e2","Type":"ContainerDied","Data":"59d8c8c71b15cea0c91fd17fe2e9e76989c04323fa2a373e286b35ba248f017e"} Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.107955 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59d8c8c71b15cea0c91fd17fe2e9e76989c04323fa2a373e286b35ba248f017e" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.109790 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"b3ea5c73-8dea-4908-b688-94b7639054da","Type":"ContainerDied","Data":"6a77a0b6330be636ce047eea90bea9a498291ccb2a00de03dda8a35af65c56df"} Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.109822 4727 scope.go:117] "RemoveContainer" containerID="a76922ed2286eacbf268a2f9139a9ca65e780dcaf793878fceeb4d735a5258ee" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.110039 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.114603 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c3d3fe1-767d-429b-adcd-72bd15ff6f65","Type":"ContainerDied","Data":"662afca0051167d9299bbf046e00b7d0282f5186333d3f76c9e63509f8c89803"} Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.114647 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="662afca0051167d9299bbf046e00b7d0282f5186333d3f76c9e63509f8c89803" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.121349 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" event={"ID":"5fcd0f0a-6bef-42d5-a6be-7de638221c2d","Type":"ContainerDied","Data":"cec6c4c781e5a34774f96fda57ae0c40e22baa63cb2ea296903311a7fd16b7fa"} Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.121418 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cec6c4c781e5a34774f96fda57ae0c40e22baa63cb2ea296903311a7fd16b7fa" Dec 10 14:58:32 crc kubenswrapper[4727]: E1210 14:58:32.123729 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="d3fd12da-d7cc-49bc-b30b-346a7dd11f92" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.144419 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.158765 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.172539 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.173691 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.193143 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.230762 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 14:58:32 crc kubenswrapper[4727]: E1210 14:58:32.231253 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ea5c73-8dea-4908-b688-94b7639054da" containerName="cloudkitty-proc" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231271 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ea5c73-8dea-4908-b688-94b7639054da" containerName="cloudkitty-proc" Dec 10 14:58:32 crc kubenswrapper[4727]: E1210 14:58:32.231289 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="ceilometer-notification-agent" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231298 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="ceilometer-notification-agent" Dec 10 14:58:32 crc kubenswrapper[4727]: E1210 14:58:32.231316 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fcd0f0a-6bef-42d5-a6be-7de638221c2d" containerName="init" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231322 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fcd0f0a-6bef-42d5-a6be-7de638221c2d" containerName="init" Dec 10 14:58:32 crc kubenswrapper[4727]: E1210 14:58:32.231334 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fcd0f0a-6bef-42d5-a6be-7de638221c2d" containerName="dnsmasq-dns" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231340 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fcd0f0a-6bef-42d5-a6be-7de638221c2d" containerName="dnsmasq-dns" Dec 10 14:58:32 crc kubenswrapper[4727]: E1210 14:58:32.231355 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c526c7c-ad39-4172-a998-a6935f2522e2" containerName="glance-log" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231362 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c526c7c-ad39-4172-a998-a6935f2522e2" containerName="glance-log" Dec 10 14:58:32 crc kubenswrapper[4727]: E1210 14:58:32.231378 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="ceilometer-central-agent" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231385 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="ceilometer-central-agent" Dec 10 14:58:32 crc kubenswrapper[4727]: E1210 14:58:32.231396 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c526c7c-ad39-4172-a998-a6935f2522e2" containerName="glance-httpd" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231402 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c526c7c-ad39-4172-a998-a6935f2522e2" containerName="glance-httpd" Dec 10 14:58:32 crc kubenswrapper[4727]: E1210 14:58:32.231414 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="proxy-httpd" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231421 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="proxy-httpd" Dec 10 
14:58:32 crc kubenswrapper[4727]: E1210 14:58:32.231429 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="sg-core" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231435 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="sg-core" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231626 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="ceilometer-central-agent" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231644 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fcd0f0a-6bef-42d5-a6be-7de638221c2d" containerName="dnsmasq-dns" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231656 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="proxy-httpd" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231669 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c526c7c-ad39-4172-a998-a6935f2522e2" containerName="glance-httpd" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231683 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="ceilometer-notification-agent" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231694 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c526c7c-ad39-4172-a998-a6935f2522e2" containerName="glance-log" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231706 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" containerName="sg-core" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.231713 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ea5c73-8dea-4908-b688-94b7639054da" containerName="cloudkitty-proc" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.232577 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.238629 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.274978 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.311174 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-scripts\") pod \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.311577 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-run-httpd\") pod \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.311717 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-dns-svc\") pod \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") " Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.311819 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-config-data\") pod \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.312095 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") pod \"1c526c7c-ad39-4172-a998-a6935f2522e2\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.312195 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pp69\" (UniqueName: \"kubernetes.io/projected/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-kube-api-access-5pp69\") pod \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.312286 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-sg-core-conf-yaml\") pod \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") " Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.312396 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c526c7c-ad39-4172-a998-a6935f2522e2-logs\") pod \"1c526c7c-ad39-4172-a998-a6935f2522e2\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.312554 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65sgr\" (UniqueName: \"kubernetes.io/projected/1c526c7c-ad39-4172-a998-a6935f2522e2-kube-api-access-65sgr\") pod \"1c526c7c-ad39-4172-a998-a6935f2522e2\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") " Dec 10 14:58:32 
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.312709 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-log-httpd\") pod \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") "
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.312840 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-dns-swift-storage-0\") pod \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") "
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.312937 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-internal-tls-certs\") pod \"1c526c7c-ad39-4172-a998-a6935f2522e2\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") "
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.313037 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hqsm\" (UniqueName: \"kubernetes.io/projected/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-kube-api-access-2hqsm\") pod \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") "
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.313143 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-combined-ca-bundle\") pod \"1c526c7c-ad39-4172-a998-a6935f2522e2\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") "
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.313268 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-ovsdbserver-nb\") pod \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") "
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.313336 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-scripts\") pod \"1c526c7c-ad39-4172-a998-a6935f2522e2\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") "
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.313412 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-config-data\") pod \"1c526c7c-ad39-4172-a998-a6935f2522e2\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") "
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.313491 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-config\") pod \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") "
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.313585 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-ovsdbserver-sb\") pod \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\" (UID: \"5fcd0f0a-6bef-42d5-a6be-7de638221c2d\") "
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.313764 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c526c7c-ad39-4172-a998-a6935f2522e2-httpd-run\") pod \"1c526c7c-ad39-4172-a998-a6935f2522e2\" (UID: \"1c526c7c-ad39-4172-a998-a6935f2522e2\") "
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.313840 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-combined-ca-bundle\") pod \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\" (UID: \"9c3d3fe1-767d-429b-adcd-72bd15ff6f65\") "
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.314451 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9c3d3fe1-767d-429b-adcd-72bd15ff6f65" (UID: "9c3d3fe1-767d-429b-adcd-72bd15ff6f65"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.316464 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9c3d3fe1-767d-429b-adcd-72bd15ff6f65" (UID: "9c3d3fe1-767d-429b-adcd-72bd15ff6f65"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.317257 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c526c7c-ad39-4172-a998-a6935f2522e2-logs" (OuterVolumeSpecName: "logs") pod "1c526c7c-ad39-4172-a998-a6935f2522e2" (UID: "1c526c7c-ad39-4172-a998-a6935f2522e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.328836 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-kube-api-access-5pp69" (OuterVolumeSpecName: "kube-api-access-5pp69") pod "9c3d3fe1-767d-429b-adcd-72bd15ff6f65" (UID: "9c3d3fe1-767d-429b-adcd-72bd15ff6f65"). InnerVolumeSpecName "kube-api-access-5pp69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.329681 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-scripts" (OuterVolumeSpecName: "scripts") pod "1c526c7c-ad39-4172-a998-a6935f2522e2" (UID: "1c526c7c-ad39-4172-a998-a6935f2522e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.336461 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c526c7c-ad39-4172-a998-a6935f2522e2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1c526c7c-ad39-4172-a998-a6935f2522e2" (UID: "1c526c7c-ad39-4172-a998-a6935f2522e2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
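The reconciler_common.go:159 / operation_generator.go:803 pairs above are the kubelet volume manager reconciling its desired state against its actual state: pods 9c3d3fe1-..., 1c526c7c-... and 5fcd0f0a-... have been deleted, so every volume still mounted for them is queued for unmount, and each successful TearDown is then reported per plugin (empty-dir, secret, projected, configmap, csi). A minimal Go sketch of that diff step, with hypothetical, heavily simplified types (the real reconciler in kubelet's volume manager tracks far more state):

    package main

    import "fmt"

    // mountedVolume is a hypothetical key for "this volume is mounted for
    // this pod"; the reconciler unmounts anything present in actual state
    // but absent from desired state.
    type mountedVolume struct {
    	podUID     string
    	volumeName string
    }

    func volumesToUnmount(desired, actual []mountedVolume) []mountedVolume {
    	want := make(map[mountedVolume]bool, len(desired))
    	for _, v := range desired {
    		want[v] = true
    	}
    	var out []mountedVolume
    	for _, v := range actual {
    		if !want[v] { // pod deleted: volume no longer desired
    			out = append(out, v)
    		}
    	}
    	return out
    }

    func main() {
    	actual := []mountedVolume{
    		{"9c3d3fe1-767d-429b-adcd-72bd15ff6f65", "scripts"},
    		{"b867ef72-dc0e-475c-9368-ad959ef5c131", "config-data"},
    	}
    	desired := []mountedVolume{ // the deleted pod's volumes are gone
    		{"b867ef72-dc0e-475c-9368-ad959ef5c131", "config-data"},
    	}
    	for _, v := range volumesToUnmount(desired, actual) {
    		fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.volumeName, v.podUID)
    	}
    }

This is why the unmounts for the three old pods interleave freely with mounts for the incoming cloudkitty-proc-0 pod: both are just work items produced by the same reconcile pass.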
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.337287 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c526c7c-ad39-4172-a998-a6935f2522e2-kube-api-access-65sgr" (OuterVolumeSpecName: "kube-api-access-65sgr") pod "1c526c7c-ad39-4172-a998-a6935f2522e2" (UID: "1c526c7c-ad39-4172-a998-a6935f2522e2"). InnerVolumeSpecName "kube-api-access-65sgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.349205 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jjxh\" (UniqueName: \"kubernetes.io/projected/b867ef72-dc0e-475c-9368-ad959ef5c131-kube-api-access-4jjxh\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.350233 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b867ef72-dc0e-475c-9368-ad959ef5c131-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.350352 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b867ef72-dc0e-475c-9368-ad959ef5c131-certs\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.350430 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b867ef72-dc0e-475c-9368-ad959ef5c131-scripts\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.350559 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b867ef72-dc0e-475c-9368-ad959ef5c131-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.351145 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b867ef72-dc0e-475c-9368-ad959ef5c131-config-data\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.351370 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.351507 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c526c7c-ad39-4172-a998-a6935f2522e2-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.351591 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-run-httpd\") on 
node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.351656 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pp69\" (UniqueName: \"kubernetes.io/projected/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-kube-api-access-5pp69\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.351718 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c526c7c-ad39-4172-a998-a6935f2522e2-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.351784 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65sgr\" (UniqueName: \"kubernetes.io/projected/1c526c7c-ad39-4172-a998-a6935f2522e2-kube-api-access-65sgr\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.351844 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.384657 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590" (OuterVolumeSpecName: "glance") pod "1c526c7c-ad39-4172-a998-a6935f2522e2" (UID: "1c526c7c-ad39-4172-a998-a6935f2522e2"). InnerVolumeSpecName "pvc-7a322af2-a713-4577-8221-4254467d2590". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.390024 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-scripts" (OuterVolumeSpecName: "scripts") pod "9c3d3fe1-767d-429b-adcd-72bd15ff6f65" (UID: "9c3d3fe1-767d-429b-adcd-72bd15ff6f65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.396401 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-kube-api-access-2hqsm" (OuterVolumeSpecName: "kube-api-access-2hqsm") pod "5fcd0f0a-6bef-42d5-a6be-7de638221c2d" (UID: "5fcd0f0a-6bef-42d5-a6be-7de638221c2d"). InnerVolumeSpecName "kube-api-access-2hqsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.426018 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c526c7c-ad39-4172-a998-a6935f2522e2" (UID: "1c526c7c-ad39-4172-a998-a6935f2522e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.440450 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9c3d3fe1-767d-429b-adcd-72bd15ff6f65" (UID: "9c3d3fe1-767d-429b-adcd-72bd15ff6f65"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.456780 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b867ef72-dc0e-475c-9368-ad959ef5c131-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.457160 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b867ef72-dc0e-475c-9368-ad959ef5c131-certs\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.457844 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b867ef72-dc0e-475c-9368-ad959ef5c131-scripts\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.458399 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b867ef72-dc0e-475c-9368-ad959ef5c131-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.458628 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b867ef72-dc0e-475c-9368-ad959ef5c131-config-data\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.458987 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jjxh\" (UniqueName: \"kubernetes.io/projected/b867ef72-dc0e-475c-9368-ad959ef5c131-kube-api-access-4jjxh\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.459475 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") on node \"crc\" " Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.459606 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.459677 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hqsm\" (UniqueName: \"kubernetes.io/projected/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-kube-api-access-2hqsm\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.459736 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.459816 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.463437 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b867ef72-dc0e-475c-9368-ad959ef5c131-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.464534 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b867ef72-dc0e-475c-9368-ad959ef5c131-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.467757 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b867ef72-dc0e-475c-9368-ad959ef5c131-scripts\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.476403 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b867ef72-dc0e-475c-9368-ad959ef5c131-config-data\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.483585 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b867ef72-dc0e-475c-9368-ad959ef5c131-certs\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.484414 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jjxh\" (UniqueName: \"kubernetes.io/projected/b867ef72-dc0e-475c-9368-ad959ef5c131-kube-api-access-4jjxh\") pod \"cloudkitty-proc-0\" (UID: \"b867ef72-dc0e-475c-9368-ad959ef5c131\") " pod="openstack/cloudkitty-proc-0" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.489865 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5fcd0f0a-6bef-42d5-a6be-7de638221c2d" (UID: "5fcd0f0a-6bef-42d5-a6be-7de638221c2d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.527928 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5fcd0f0a-6bef-42d5-a6be-7de638221c2d" (UID: "5fcd0f0a-6bef-42d5-a6be-7de638221c2d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.541319 4727 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.541490 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-config" (OuterVolumeSpecName: "config") pod "5fcd0f0a-6bef-42d5-a6be-7de638221c2d" (UID: "5fcd0f0a-6bef-42d5-a6be-7de638221c2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.542267 4727 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7a322af2-a713-4577-8221-4254467d2590" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590") on node "crc"
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.556847 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1c526c7c-ad39-4172-a998-a6935f2522e2" (UID: "1c526c7c-ad39-4172-a998-a6935f2522e2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.557614 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c3d3fe1-767d-429b-adcd-72bd15ff6f65" (UID: "9c3d3fe1-767d-429b-adcd-72bd15ff6f65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.566493 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.567956 4727 reconciler_common.go:293] "Volume detached for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") on node \"crc\" DevicePath \"\""
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.567985 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.568001 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.568017 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-config\") on node \"crc\" DevicePath \"\""
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.568035 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.568047 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.606168 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-config-data" (OuterVolumeSpecName: "config-data") pod "1c526c7c-ad39-4172-a998-a6935f2522e2" (UID: "1c526c7c-ad39-4172-a998-a6935f2522e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
succeeded for volume "kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-config-data" (OuterVolumeSpecName: "config-data") pod "1c526c7c-ad39-4172-a998-a6935f2522e2" (UID: "1c526c7c-ad39-4172-a998-a6935f2522e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.606749 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5fcd0f0a-6bef-42d5-a6be-7de638221c2d" (UID: "5fcd0f0a-6bef-42d5-a6be-7de638221c2d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.614520 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ea5c73-8dea-4908-b688-94b7639054da" path="/var/lib/kubelet/pods/b3ea5c73-8dea-4908-b688-94b7639054da/volumes" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.642420 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5fcd0f0a-6bef-42d5-a6be-7de638221c2d" (UID: "5fcd0f0a-6bef-42d5-a6be-7de638221c2d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.664559 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-config-data" (OuterVolumeSpecName: "config-data") pod "9c3d3fe1-767d-429b-adcd-72bd15ff6f65" (UID: "9c3d3fe1-767d-429b-adcd-72bd15ff6f65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.672146 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.672215 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c526c7c-ad39-4172-a998-a6935f2522e2-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.672230 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fcd0f0a-6bef-42d5-a6be-7de638221c2d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.672244 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c3d3fe1-767d-429b-adcd-72bd15ff6f65-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.816952 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.875802 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-combined-ca-bundle\") pod \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.875974 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-config\") pod \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.876054 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-ovndb-tls-certs\") pod \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.876099 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-httpd-config\") pod \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.876148 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh9kk\" (UniqueName: \"kubernetes.io/projected/0acfba51-9dd2-48cb-b22d-7a59dff45f74-kube-api-access-jh9kk\") pod \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\" (UID: \"0acfba51-9dd2-48cb-b22d-7a59dff45f74\") " Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.888530 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0acfba51-9dd2-48cb-b22d-7a59dff45f74" (UID: "0acfba51-9dd2-48cb-b22d-7a59dff45f74"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.894221 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0acfba51-9dd2-48cb-b22d-7a59dff45f74-kube-api-access-jh9kk" (OuterVolumeSpecName: "kube-api-access-jh9kk") pod "0acfba51-9dd2-48cb-b22d-7a59dff45f74" (UID: "0acfba51-9dd2-48cb-b22d-7a59dff45f74"). InnerVolumeSpecName "kube-api-access-jh9kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.947859 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-config" (OuterVolumeSpecName: "config") pod "0acfba51-9dd2-48cb-b22d-7a59dff45f74" (UID: "0acfba51-9dd2-48cb-b22d-7a59dff45f74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.958968 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0acfba51-9dd2-48cb-b22d-7a59dff45f74" (UID: "0acfba51-9dd2-48cb-b22d-7a59dff45f74"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.980497 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.980540 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.980551 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.980560 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh9kk\" (UniqueName: \"kubernetes.io/projected/0acfba51-9dd2-48cb-b22d-7a59dff45f74-kube-api-access-jh9kk\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:32 crc kubenswrapper[4727]: I1210 14:58:32.995727 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0acfba51-9dd2-48cb-b22d-7a59dff45f74" (UID: "0acfba51-9dd2-48cb-b22d-7a59dff45f74"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.082396 4727 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0acfba51-9dd2-48cb-b22d-7a59dff45f74-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.133727 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f9b8bf876-ww2gz" event={"ID":"0acfba51-9dd2-48cb-b22d-7a59dff45f74","Type":"ContainerDied","Data":"34f77fd14ac636126e024f9473bcfad476b9aae83861361de628e93293af1bd1"} Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.133784 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.133806 4727 scope.go:117] "RemoveContainer" containerID="47fb4b5341867b21a4284760d2829c377b23bf60434a949563fe300973a2a490" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.133846 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.134132 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f9b8bf876-ww2gz" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.134607 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lr6qp" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.173812 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.306310 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.321178 4727 scope.go:117] "RemoveContainer" containerID="74aec552eb303bb4a2cc48d7317680363b863600019a3b48bb563ae33d64efc7" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.321717 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.331005 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.341234 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.365487 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:58:33 crc kubenswrapper[4727]: E1210 14:58:33.366113 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acfba51-9dd2-48cb-b22d-7a59dff45f74" containerName="neutron-httpd" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.366138 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acfba51-9dd2-48cb-b22d-7a59dff45f74" containerName="neutron-httpd" Dec 10 14:58:33 crc kubenswrapper[4727]: E1210 14:58:33.366153 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acfba51-9dd2-48cb-b22d-7a59dff45f74" containerName="neutron-api" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.366159 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acfba51-9dd2-48cb-b22d-7a59dff45f74" containerName="neutron-api" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.366435 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acfba51-9dd2-48cb-b22d-7a59dff45f74" containerName="neutron-httpd" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.366461 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acfba51-9dd2-48cb-b22d-7a59dff45f74" containerName="neutron-api" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.367989 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.377740 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.377740 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.378078 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7f9b8bf876-ww2gz"] Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.412695 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7f9b8bf876-ww2gz"] Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.428629 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.442508 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.447020 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.451518 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.451684 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.459254 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.475677 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lr6qp"] Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.486226 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lr6qp"] Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.494035 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/027b0dec-e154-4f38-be80-ae169e00c6a4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.494096 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.494199 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/027b0dec-e154-4f38-be80-ae169e00c6a4-logs\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.494231 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/027b0dec-e154-4f38-be80-ae169e00c6a4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.494258 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/027b0dec-e154-4f38-be80-ae169e00c6a4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.494447 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/027b0dec-e154-4f38-be80-ae169e00c6a4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.494508 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/027b0dec-e154-4f38-be80-ae169e00c6a4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.494635 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f22p\" (UniqueName: \"kubernetes.io/projected/027b0dec-e154-4f38-be80-ae169e00c6a4-kube-api-access-8f22p\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.596556 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/027b0dec-e154-4f38-be80-ae169e00c6a4-logs\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.596634 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-config-data\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.596668 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/027b0dec-e154-4f38-be80-ae169e00c6a4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.596704 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/027b0dec-e154-4f38-be80-ae169e00c6a4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.596791 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-scripts\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.596841 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n68l8\" (UniqueName: \"kubernetes.io/projected/30db53bd-11a2-468d-9413-dcc1ab67bad5-kube-api-access-n68l8\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.596997 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30db53bd-11a2-468d-9413-dcc1ab67bad5-run-httpd\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.597058 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/027b0dec-e154-4f38-be80-ae169e00c6a4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.597122 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/027b0dec-e154-4f38-be80-ae169e00c6a4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.597218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f22p\" (UniqueName: \"kubernetes.io/projected/027b0dec-e154-4f38-be80-ae169e00c6a4-kube-api-access-8f22p\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.597257 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/027b0dec-e154-4f38-be80-ae169e00c6a4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.597295 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.597324 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.597386 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/30db53bd-11a2-468d-9413-dcc1ab67bad5-log-httpd\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.597427 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.597709 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/027b0dec-e154-4f38-be80-ae169e00c6a4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.597775 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/027b0dec-e154-4f38-be80-ae169e00c6a4-logs\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.601483 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/027b0dec-e154-4f38-be80-ae169e00c6a4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.601698 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/027b0dec-e154-4f38-be80-ae169e00c6a4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.602752 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.602798 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a620bc24bbc5ba249cf329bc1ea3ed90fd24230212b99e061c02e82f00eaedd9/globalmount\"" pod="openstack/glance-default-internal-api-0"
Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.602846 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/027b0dec-e154-4f38-be80-ae169e00c6a4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.602848 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/027b0dec-e154-4f38-be80-ae169e00c6a4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.619218 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f22p\" (UniqueName: \"kubernetes.io/projected/027b0dec-e154-4f38-be80-ae169e00c6a4-kube-api-access-8f22p\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.643605 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7a322af2-a713-4577-8221-4254467d2590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7a322af2-a713-4577-8221-4254467d2590\") pod \"glance-default-internal-api-0\" (UID: \"027b0dec-e154-4f38-be80-ae169e00c6a4\") " pod="openstack/glance-default-internal-api-0"
Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.699248 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30db53bd-11a2-468d-9413-dcc1ab67bad5-run-httpd\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0"
Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.699436 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0"
Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.699490 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30db53bd-11a2-468d-9413-dcc1ab67bad5-log-httpd\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0"
Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.699515 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0"
\"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.699599 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-config-data\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.699635 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-scripts\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.699661 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n68l8\" (UniqueName: \"kubernetes.io/projected/30db53bd-11a2-468d-9413-dcc1ab67bad5-kube-api-access-n68l8\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.699884 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30db53bd-11a2-468d-9413-dcc1ab67bad5-run-httpd\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.700742 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30db53bd-11a2-468d-9413-dcc1ab67bad5-log-httpd\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.703216 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.703749 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.704109 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.705554 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-scripts\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.707111 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-config-data\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.719956 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n68l8\" (UniqueName: \"kubernetes.io/projected/30db53bd-11a2-468d-9413-dcc1ab67bad5-kube-api-access-n68l8\") pod \"ceilometer-0\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " pod="openstack/ceilometer-0" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.748389 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="7eab25e3-c058-4db1-b610-e5394ae0c2c1" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.166:9292/healthcheck\": read tcp 10.217.0.2:33892->10.217.0.166:9292: read: connection reset by peer" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.748451 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="7eab25e3-c058-4db1-b610-e5394ae0c2c1" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.166:9292/healthcheck\": read tcp 10.217.0.2:33894->10.217.0.166:9292: read: connection reset by peer" Dec 10 14:58:33 crc kubenswrapper[4727]: I1210 14:58:33.774923 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:58:34 crc kubenswrapper[4727]: I1210 14:58:34.157146 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"b867ef72-dc0e-475c-9368-ad959ef5c131","Type":"ContainerStarted","Data":"3440fe9faa4e158cfabeb240ff155c8c08d9cd48ad50d5c115dd44fd46cbd72d"} Dec 10 14:58:34 crc kubenswrapper[4727]: I1210 14:58:34.157605 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"b867ef72-dc0e-475c-9368-ad959ef5c131","Type":"ContainerStarted","Data":"196b0772aa52330c128310874b360e9b99cc1dbb07f2fbaab813a5718f57a80d"} Dec 10 14:58:34 crc kubenswrapper[4727]: I1210 14:58:34.191952 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:34 crc kubenswrapper[4727]: I1210 14:58:34.284274 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:58:34 crc kubenswrapper[4727]: W1210 14:58:34.323518 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod027b0dec_e154_4f38_be80_ae169e00c6a4.slice/crio-96da3bb27da650e49fe935078d5289ee9892434e09557a5e1f0ac6fc24cefc2c WatchSource:0}: Error finding container 96da3bb27da650e49fe935078d5289ee9892434e09557a5e1f0ac6fc24cefc2c: Status 404 returned error can't find the container with id 96da3bb27da650e49fe935078d5289ee9892434e09557a5e1f0ac6fc24cefc2c Dec 10 14:58:34 crc kubenswrapper[4727]: I1210 14:58:34.579388 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0acfba51-9dd2-48cb-b22d-7a59dff45f74" path="/var/lib/kubelet/pods/0acfba51-9dd2-48cb-b22d-7a59dff45f74/volumes" Dec 10 14:58:34 crc kubenswrapper[4727]: I1210 14:58:34.580686 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c526c7c-ad39-4172-a998-a6935f2522e2" path="/var/lib/kubelet/pods/1c526c7c-ad39-4172-a998-a6935f2522e2/volumes" Dec 10 14:58:34 crc kubenswrapper[4727]: I1210 14:58:34.581471 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fcd0f0a-6bef-42d5-a6be-7de638221c2d" path="/var/lib/kubelet/pods/5fcd0f0a-6bef-42d5-a6be-7de638221c2d/volumes" Dec 10 14:58:34 crc kubenswrapper[4727]: I1210 14:58:34.582823 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c3d3fe1-767d-429b-adcd-72bd15ff6f65" path="/var/lib/kubelet/pods/9c3d3fe1-767d-429b-adcd-72bd15ff6f65/volumes" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.107998 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.184888 4727 generic.go:334] "Generic (PLEG): container finished" podID="7eab25e3-c058-4db1-b610-e5394ae0c2c1" containerID="3d1787371df6c76ae727a45e8e2d7bb6492c4f12f5af80f95f0b2b49574447b6" exitCode=0 Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.185015 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7eab25e3-c058-4db1-b610-e5394ae0c2c1","Type":"ContainerDied","Data":"3d1787371df6c76ae727a45e8e2d7bb6492c4f12f5af80f95f0b2b49574447b6"} Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.185053 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7eab25e3-c058-4db1-b610-e5394ae0c2c1","Type":"ContainerDied","Data":"b616c5970f826879b00a2c4c722b0594e74952d951ad847d044e7f083da8c380"} Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.185080 4727 scope.go:117] "RemoveContainer" containerID="3d1787371df6c76ae727a45e8e2d7bb6492c4f12f5af80f95f0b2b49574447b6" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.185266 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.191215 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30db53bd-11a2-468d-9413-dcc1ab67bad5","Type":"ContainerStarted","Data":"efa3b34192ab9c328e72048ad3ca6199f11f25e56abaddc00965385e26e1c02e"} Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.195868 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"027b0dec-e154-4f38-be80-ae169e00c6a4","Type":"ContainerStarted","Data":"96da3bb27da650e49fe935078d5289ee9892434e09557a5e1f0ac6fc24cefc2c"} Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.239276 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=3.239248623 podStartE2EDuration="3.239248623s" podCreationTimestamp="2025-12-10 14:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:35.223378272 +0000 UTC m=+1619.418152814" watchObservedRunningTime="2025-12-10 14:58:35.239248623 +0000 UTC m=+1619.434023165" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.240003 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7eab25e3-c058-4db1-b610-e5394ae0c2c1-httpd-run\") pod \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.240063 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-scripts\") pod \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.240125 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srv4k\" (UniqueName: \"kubernetes.io/projected/7eab25e3-c058-4db1-b610-e5394ae0c2c1-kube-api-access-srv4k\") pod \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " Dec 10 
14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.240172 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-public-tls-certs\") pod \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.240230 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-combined-ca-bundle\") pod \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.240413 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eab25e3-c058-4db1-b610-e5394ae0c2c1-logs\") pod \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.240582 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") pod \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.240608 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-config-data\") pod \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.245641 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eab25e3-c058-4db1-b610-e5394ae0c2c1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7eab25e3-c058-4db1-b610-e5394ae0c2c1" (UID: "7eab25e3-c058-4db1-b610-e5394ae0c2c1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.246325 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eab25e3-c058-4db1-b610-e5394ae0c2c1-logs" (OuterVolumeSpecName: "logs") pod "7eab25e3-c058-4db1-b610-e5394ae0c2c1" (UID: "7eab25e3-c058-4db1-b610-e5394ae0c2c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.249949 4727 scope.go:117] "RemoveContainer" containerID="ccc8fdbcd654ff94c8f4ffa12bb73c87d91523ea1a02dd5bbc0a7ba557e2635a" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.259113 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eab25e3-c058-4db1-b610-e5394ae0c2c1-kube-api-access-srv4k" (OuterVolumeSpecName: "kube-api-access-srv4k") pod "7eab25e3-c058-4db1-b610-e5394ae0c2c1" (UID: "7eab25e3-c058-4db1-b610-e5394ae0c2c1"). InnerVolumeSpecName "kube-api-access-srv4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.263105 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-scripts" (OuterVolumeSpecName: "scripts") pod "7eab25e3-c058-4db1-b610-e5394ae0c2c1" (UID: "7eab25e3-c058-4db1-b610-e5394ae0c2c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.267319 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb" (OuterVolumeSpecName: "glance") pod "7eab25e3-c058-4db1-b610-e5394ae0c2c1" (UID: "7eab25e3-c058-4db1-b610-e5394ae0c2c1"). InnerVolumeSpecName "pvc-0790eff2-014b-4039-a134-0378a633bedb". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.356323 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-config-data" (OuterVolumeSpecName: "config-data") pod "7eab25e3-c058-4db1-b610-e5394ae0c2c1" (UID: "7eab25e3-c058-4db1-b610-e5394ae0c2c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.356868 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-config-data\") pod \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\" (UID: \"7eab25e3-c058-4db1-b610-e5394ae0c2c1\") " Dec 10 14:58:35 crc kubenswrapper[4727]: W1210 14:58:35.357065 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7eab25e3-c058-4db1-b610-e5394ae0c2c1/volumes/kubernetes.io~secret/config-data Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.357091 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-config-data" (OuterVolumeSpecName: "config-data") pod "7eab25e3-c058-4db1-b610-e5394ae0c2c1" (UID: "7eab25e3-c058-4db1-b610-e5394ae0c2c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.358010 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eab25e3-c058-4db1-b610-e5394ae0c2c1-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.358057 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") on node \"crc\" " Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.358074 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.358086 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7eab25e3-c058-4db1-b610-e5394ae0c2c1-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.358097 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.358108 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srv4k\" (UniqueName: \"kubernetes.io/projected/7eab25e3-c058-4db1-b610-e5394ae0c2c1-kube-api-access-srv4k\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.363120 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7eab25e3-c058-4db1-b610-e5394ae0c2c1" (UID: "7eab25e3-c058-4db1-b610-e5394ae0c2c1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.363308 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7eab25e3-c058-4db1-b610-e5394ae0c2c1" (UID: "7eab25e3-c058-4db1-b610-e5394ae0c2c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.418313 4727 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.418793 4727 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0790eff2-014b-4039-a134-0378a633bedb" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb") on node "crc" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.460721 4727 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.460763 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eab25e3-c058-4db1-b610-e5394ae0c2c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.460793 4727 reconciler_common.go:293] "Volume detached for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.492423 4727 scope.go:117] "RemoveContainer" containerID="3d1787371df6c76ae727a45e8e2d7bb6492c4f12f5af80f95f0b2b49574447b6" Dec 10 14:58:35 crc kubenswrapper[4727]: E1210 14:58:35.492940 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d1787371df6c76ae727a45e8e2d7bb6492c4f12f5af80f95f0b2b49574447b6\": container with ID starting with 3d1787371df6c76ae727a45e8e2d7bb6492c4f12f5af80f95f0b2b49574447b6 not found: ID does not exist" containerID="3d1787371df6c76ae727a45e8e2d7bb6492c4f12f5af80f95f0b2b49574447b6" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.492973 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d1787371df6c76ae727a45e8e2d7bb6492c4f12f5af80f95f0b2b49574447b6"} err="failed to get container status \"3d1787371df6c76ae727a45e8e2d7bb6492c4f12f5af80f95f0b2b49574447b6\": rpc error: code = NotFound desc = could not find container \"3d1787371df6c76ae727a45e8e2d7bb6492c4f12f5af80f95f0b2b49574447b6\": container with ID starting with 3d1787371df6c76ae727a45e8e2d7bb6492c4f12f5af80f95f0b2b49574447b6 not found: ID does not exist" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.492995 4727 scope.go:117] "RemoveContainer" containerID="ccc8fdbcd654ff94c8f4ffa12bb73c87d91523ea1a02dd5bbc0a7ba557e2635a" Dec 10 14:58:35 crc kubenswrapper[4727]: E1210 14:58:35.493493 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccc8fdbcd654ff94c8f4ffa12bb73c87d91523ea1a02dd5bbc0a7ba557e2635a\": container with ID starting with ccc8fdbcd654ff94c8f4ffa12bb73c87d91523ea1a02dd5bbc0a7ba557e2635a not found: ID does not exist" containerID="ccc8fdbcd654ff94c8f4ffa12bb73c87d91523ea1a02dd5bbc0a7ba557e2635a" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.493512 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccc8fdbcd654ff94c8f4ffa12bb73c87d91523ea1a02dd5bbc0a7ba557e2635a"} err="failed to get container status \"ccc8fdbcd654ff94c8f4ffa12bb73c87d91523ea1a02dd5bbc0a7ba557e2635a\": rpc error: code = NotFound desc = could not find container \"ccc8fdbcd654ff94c8f4ffa12bb73c87d91523ea1a02dd5bbc0a7ba557e2635a\": container with ID starting with 
ccc8fdbcd654ff94c8f4ffa12bb73c87d91523ea1a02dd5bbc0a7ba557e2635a not found: ID does not exist" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.524771 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.535698 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.563593 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:58:35 crc kubenswrapper[4727]: E1210 14:58:35.564148 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eab25e3-c058-4db1-b610-e5394ae0c2c1" containerName="glance-httpd" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.564167 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eab25e3-c058-4db1-b610-e5394ae0c2c1" containerName="glance-httpd" Dec 10 14:58:35 crc kubenswrapper[4727]: E1210 14:58:35.564223 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eab25e3-c058-4db1-b610-e5394ae0c2c1" containerName="glance-log" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.564234 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eab25e3-c058-4db1-b610-e5394ae0c2c1" containerName="glance-log" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.564540 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eab25e3-c058-4db1-b610-e5394ae0c2c1" containerName="glance-log" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.564566 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eab25e3-c058-4db1-b610-e5394ae0c2c1" containerName="glance-httpd" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.566191 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.570279 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.570611 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.581466 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.770360 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.770539 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pxgl\" (UniqueName: \"kubernetes.io/projected/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-kube-api-access-5pxgl\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.770646 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.770693 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-scripts\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.770740 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-logs\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.770774 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-config-data\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.770873 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.770997 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.873607 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.873732 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pxgl\" (UniqueName: \"kubernetes.io/projected/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-kube-api-access-5pxgl\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.873814 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.873852 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-scripts\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.873885 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-logs\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.873926 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-config-data\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.874022 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.874057 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.875077 
4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-logs\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.875868 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.882431 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-config-data\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.886664 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-scripts\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.889150 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.889649 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.891958 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.891987 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/db4de05049fa91af9a988d2a2b63e79bbb68f0fa95a3085791268e44688729c7/globalmount\"" pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.899026 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pxgl\" (UniqueName: \"kubernetes.io/projected/8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e-kube-api-access-5pxgl\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4727]: I1210 14:58:35.983964 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0790eff2-014b-4039-a134-0378a633bedb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0790eff2-014b-4039-a134-0378a633bedb\") pod \"glance-default-external-api-0\" (UID: \"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:36 crc kubenswrapper[4727]: I1210 14:58:36.199002 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 14:58:36 crc kubenswrapper[4727]: I1210 14:58:36.217279 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"027b0dec-e154-4f38-be80-ae169e00c6a4","Type":"ContainerStarted","Data":"8e003592d66026030460dbb46a7e6c89fcd41e0606a73b5225dfa54866e95121"} Dec 10 14:58:36 crc kubenswrapper[4727]: I1210 14:58:36.592536 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eab25e3-c058-4db1-b610-e5394ae0c2c1" path="/var/lib/kubelet/pods/7eab25e3-c058-4db1-b610-e5394ae0c2c1/volumes" Dec 10 14:58:37 crc kubenswrapper[4727]: I1210 14:58:37.081859 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:58:37 crc kubenswrapper[4727]: I1210 14:58:37.255601 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e","Type":"ContainerStarted","Data":"5da324fb9bd55117d877e88f06e400d0bb9bfdfb243309e79bf4dd7fad42474f"} Dec 10 14:58:39 crc kubenswrapper[4727]: I1210 14:58:39.286381 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30db53bd-11a2-468d-9413-dcc1ab67bad5","Type":"ContainerStarted","Data":"207f59d80d78f4eed20197e6b1e4a0863870c801daa6a8862338eb5c4df8d4cb"} Dec 10 14:58:39 crc kubenswrapper[4727]: I1210 14:58:39.290247 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e","Type":"ContainerStarted","Data":"749906c931836009d6561e6b852abada603ecc09d552515a71c26196937c6cf7"} Dec 10 14:58:39 crc kubenswrapper[4727]: I1210 14:58:39.293916 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"027b0dec-e154-4f38-be80-ae169e00c6a4","Type":"ContainerStarted","Data":"43e73e5b02fdcf0a919b6abccd3d30746f00ef54b9d204ad765f661059665da8"} Dec 10 14:58:39 crc kubenswrapper[4727]: I1210 14:58:39.327029 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.327001468 podStartE2EDuration="6.327001468s" podCreationTimestamp="2025-12-10 14:58:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:39.316121833 +0000 UTC m=+1623.510896375" watchObservedRunningTime="2025-12-10 14:58:39.327001468 +0000 UTC m=+1623.521776010" Dec 10 14:58:39 crc kubenswrapper[4727]: I1210 14:58:39.791395 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:40 crc kubenswrapper[4727]: I1210 14:58:40.309982 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e","Type":"ContainerStarted","Data":"40b3bd17fdb51cfb69d68db8914b16f6754285f031fd8cf3fbc4914647c3f6eb"} Dec 10 14:58:40 crc kubenswrapper[4727]: I1210 14:58:40.342813 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.342792671 podStartE2EDuration="5.342792671s" podCreationTimestamp="2025-12-10 14:58:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:40.34235258 +0000 UTC m=+1624.537127122" watchObservedRunningTime="2025-12-10 14:58:40.342792671 +0000 UTC m=+1624.537567213" Dec 10 14:58:41 crc kubenswrapper[4727]: I1210 14:58:41.328020 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30db53bd-11a2-468d-9413-dcc1ab67bad5","Type":"ContainerStarted","Data":"aef2862e411c86c45ddec889a525f7996a75e5bd3d79e912e17fefa4bc200f65"} Dec 10 14:58:42 crc kubenswrapper[4727]: I1210 14:58:42.342459 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30db53bd-11a2-468d-9413-dcc1ab67bad5","Type":"ContainerStarted","Data":"5bb32fb54b1446b067d8415de0a40205f830a9e7a73c5363e265c16681fa5f7e"} Dec 10 14:58:43 crc kubenswrapper[4727]: I1210 14:58:43.705491 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:43 crc kubenswrapper[4727]: I1210 14:58:43.705864 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:43 crc kubenswrapper[4727]: I1210 14:58:43.760628 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:43 crc kubenswrapper[4727]: I1210 14:58:43.777334 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.390118 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30db53bd-11a2-468d-9413-dcc1ab67bad5","Type":"ContainerStarted","Data":"c68c1cd8ae8a63875e0ed1a952cc6bdcb696440e77ce166e4ba547453e4ea2b3"} Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.390645 4727 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerName="ceilometer-central-agent" containerID="cri-o://207f59d80d78f4eed20197e6b1e4a0863870c801daa6a8862338eb5c4df8d4cb" gracePeriod=30 Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.390780 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.391337 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerName="proxy-httpd" containerID="cri-o://c68c1cd8ae8a63875e0ed1a952cc6bdcb696440e77ce166e4ba547453e4ea2b3" gracePeriod=30 Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.391414 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerName="sg-core" containerID="cri-o://5bb32fb54b1446b067d8415de0a40205f830a9e7a73c5363e265c16681fa5f7e" gracePeriod=30 Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.391468 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerName="ceilometer-notification-agent" containerID="cri-o://aef2862e411c86c45ddec889a525f7996a75e5bd3d79e912e17fefa4bc200f65" gracePeriod=30 Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.391591 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.391613 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.426151 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.53512014 podStartE2EDuration="11.426126415s" podCreationTimestamp="2025-12-10 14:58:33 +0000 UTC" firstStartedPulling="2025-12-10 14:58:34.197852092 +0000 UTC m=+1618.392626644" lastFinishedPulling="2025-12-10 14:58:43.088858377 +0000 UTC m=+1627.283632919" observedRunningTime="2025-12-10 14:58:44.41445728 +0000 UTC m=+1628.609231822" watchObservedRunningTime="2025-12-10 14:58:44.426126415 +0000 UTC m=+1628.620900957" Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.694625 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-7l2zw"] Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.696615 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7l2zw" Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.719970 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7l2zw"] Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.851888 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd4s9\" (UniqueName: \"kubernetes.io/projected/75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a-kube-api-access-gd4s9\") pod \"nova-api-db-create-7l2zw\" (UID: \"75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a\") " pod="openstack/nova-api-db-create-7l2zw" Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.852128 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a-operator-scripts\") pod \"nova-api-db-create-7l2zw\" (UID: \"75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a\") " pod="openstack/nova-api-db-create-7l2zw" Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.889395 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-8f779"] Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.891289 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8f779" Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.906986 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-23fc-account-create-update-rlss8"] Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.908661 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-23fc-account-create-update-rlss8" Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.921068 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8f779"] Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.928615 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.950786 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-23fc-account-create-update-rlss8"] Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.954589 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a-operator-scripts\") pod \"nova-api-db-create-7l2zw\" (UID: \"75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a\") " pod="openstack/nova-api-db-create-7l2zw" Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.954734 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd4s9\" (UniqueName: \"kubernetes.io/projected/75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a-kube-api-access-gd4s9\") pod \"nova-api-db-create-7l2zw\" (UID: \"75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a\") " pod="openstack/nova-api-db-create-7l2zw" Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.964603 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a-operator-scripts\") pod \"nova-api-db-create-7l2zw\" (UID: \"75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a\") " pod="openstack/nova-api-db-create-7l2zw" Dec 10 14:58:44 crc kubenswrapper[4727]: I1210 14:58:44.980485 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gd4s9\" (UniqueName: \"kubernetes.io/projected/75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a-kube-api-access-gd4s9\") pod \"nova-api-db-create-7l2zw\" (UID: \"75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a\") " pod="openstack/nova-api-db-create-7l2zw" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.010007 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-w96wg"] Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.011443 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-w96wg" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.058667 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/065d3ade-3310-4073-843a-73b9d05651b0-operator-scripts\") pod \"nova-cell0-db-create-8f779\" (UID: \"065d3ade-3310-4073-843a-73b9d05651b0\") " pod="openstack/nova-cell0-db-create-8f779" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.058729 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/046deabf-092c-4d2d-bbbf-0526f3a972bd-operator-scripts\") pod \"nova-api-23fc-account-create-update-rlss8\" (UID: \"046deabf-092c-4d2d-bbbf-0526f3a972bd\") " pod="openstack/nova-api-23fc-account-create-update-rlss8" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.058853 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq7fs\" (UniqueName: \"kubernetes.io/projected/065d3ade-3310-4073-843a-73b9d05651b0-kube-api-access-fq7fs\") pod \"nova-cell0-db-create-8f779\" (UID: \"065d3ade-3310-4073-843a-73b9d05651b0\") " pod="openstack/nova-cell0-db-create-8f779" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.063987 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r6r5\" (UniqueName: \"kubernetes.io/projected/046deabf-092c-4d2d-bbbf-0526f3a972bd-kube-api-access-4r6r5\") pod \"nova-api-23fc-account-create-update-rlss8\" (UID: \"046deabf-092c-4d2d-bbbf-0526f3a972bd\") " pod="openstack/nova-api-23fc-account-create-update-rlss8" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.079646 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-w96wg"] Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.123812 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7l2zw" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.152721 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-f3ad-account-create-update-bw2c8"] Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.154683 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-f3ad-account-create-update-bw2c8" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.161181 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.165670 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/065d3ade-3310-4073-843a-73b9d05651b0-operator-scripts\") pod \"nova-cell0-db-create-8f779\" (UID: \"065d3ade-3310-4073-843a-73b9d05651b0\") " pod="openstack/nova-cell0-db-create-8f779" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.165751 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6wl7\" (UniqueName: \"kubernetes.io/projected/6af8f31d-44ee-43b6-a9c8-2a86f391d33a-kube-api-access-j6wl7\") pod \"nova-cell0-f3ad-account-create-update-bw2c8\" (UID: \"6af8f31d-44ee-43b6-a9c8-2a86f391d33a\") " pod="openstack/nova-cell0-f3ad-account-create-update-bw2c8" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.165784 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/046deabf-092c-4d2d-bbbf-0526f3a972bd-operator-scripts\") pod \"nova-api-23fc-account-create-update-rlss8\" (UID: \"046deabf-092c-4d2d-bbbf-0526f3a972bd\") " pod="openstack/nova-api-23fc-account-create-update-rlss8" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.165801 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5fpf\" (UniqueName: \"kubernetes.io/projected/4d431f69-61e4-4dd9-8a34-95a5cbcfc083-kube-api-access-f5fpf\") pod \"nova-cell1-db-create-w96wg\" (UID: \"4d431f69-61e4-4dd9-8a34-95a5cbcfc083\") " pod="openstack/nova-cell1-db-create-w96wg" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.165897 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq7fs\" (UniqueName: \"kubernetes.io/projected/065d3ade-3310-4073-843a-73b9d05651b0-kube-api-access-fq7fs\") pod \"nova-cell0-db-create-8f779\" (UID: \"065d3ade-3310-4073-843a-73b9d05651b0\") " pod="openstack/nova-cell0-db-create-8f779" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.165946 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d431f69-61e4-4dd9-8a34-95a5cbcfc083-operator-scripts\") pod \"nova-cell1-db-create-w96wg\" (UID: \"4d431f69-61e4-4dd9-8a34-95a5cbcfc083\") " pod="openstack/nova-cell1-db-create-w96wg" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.165989 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r6r5\" (UniqueName: \"kubernetes.io/projected/046deabf-092c-4d2d-bbbf-0526f3a972bd-kube-api-access-4r6r5\") pod \"nova-api-23fc-account-create-update-rlss8\" (UID: \"046deabf-092c-4d2d-bbbf-0526f3a972bd\") " pod="openstack/nova-api-23fc-account-create-update-rlss8" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.166036 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6af8f31d-44ee-43b6-a9c8-2a86f391d33a-operator-scripts\") pod \"nova-cell0-f3ad-account-create-update-bw2c8\" (UID: \"6af8f31d-44ee-43b6-a9c8-2a86f391d33a\") " 
pod="openstack/nova-cell0-f3ad-account-create-update-bw2c8" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.167585 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/046deabf-092c-4d2d-bbbf-0526f3a972bd-operator-scripts\") pod \"nova-api-23fc-account-create-update-rlss8\" (UID: \"046deabf-092c-4d2d-bbbf-0526f3a972bd\") " pod="openstack/nova-api-23fc-account-create-update-rlss8" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.170097 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/065d3ade-3310-4073-843a-73b9d05651b0-operator-scripts\") pod \"nova-cell0-db-create-8f779\" (UID: \"065d3ade-3310-4073-843a-73b9d05651b0\") " pod="openstack/nova-cell0-db-create-8f779" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.178277 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f3ad-account-create-update-bw2c8"] Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.187975 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq7fs\" (UniqueName: \"kubernetes.io/projected/065d3ade-3310-4073-843a-73b9d05651b0-kube-api-access-fq7fs\") pod \"nova-cell0-db-create-8f779\" (UID: \"065d3ade-3310-4073-843a-73b9d05651b0\") " pod="openstack/nova-cell0-db-create-8f779" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.192760 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r6r5\" (UniqueName: \"kubernetes.io/projected/046deabf-092c-4d2d-bbbf-0526f3a972bd-kube-api-access-4r6r5\") pod \"nova-api-23fc-account-create-update-rlss8\" (UID: \"046deabf-092c-4d2d-bbbf-0526f3a972bd\") " pod="openstack/nova-api-23fc-account-create-update-rlss8" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.269249 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d431f69-61e4-4dd9-8a34-95a5cbcfc083-operator-scripts\") pod \"nova-cell1-db-create-w96wg\" (UID: \"4d431f69-61e4-4dd9-8a34-95a5cbcfc083\") " pod="openstack/nova-cell1-db-create-w96wg" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.269402 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6af8f31d-44ee-43b6-a9c8-2a86f391d33a-operator-scripts\") pod \"nova-cell0-f3ad-account-create-update-bw2c8\" (UID: \"6af8f31d-44ee-43b6-a9c8-2a86f391d33a\") " pod="openstack/nova-cell0-f3ad-account-create-update-bw2c8" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.269461 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8f779" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.269514 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6wl7\" (UniqueName: \"kubernetes.io/projected/6af8f31d-44ee-43b6-a9c8-2a86f391d33a-kube-api-access-j6wl7\") pod \"nova-cell0-f3ad-account-create-update-bw2c8\" (UID: \"6af8f31d-44ee-43b6-a9c8-2a86f391d33a\") " pod="openstack/nova-cell0-f3ad-account-create-update-bw2c8" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.269553 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5fpf\" (UniqueName: \"kubernetes.io/projected/4d431f69-61e4-4dd9-8a34-95a5cbcfc083-kube-api-access-f5fpf\") pod \"nova-cell1-db-create-w96wg\" (UID: \"4d431f69-61e4-4dd9-8a34-95a5cbcfc083\") " pod="openstack/nova-cell1-db-create-w96wg" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.273637 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6af8f31d-44ee-43b6-a9c8-2a86f391d33a-operator-scripts\") pod \"nova-cell0-f3ad-account-create-update-bw2c8\" (UID: \"6af8f31d-44ee-43b6-a9c8-2a86f391d33a\") " pod="openstack/nova-cell0-f3ad-account-create-update-bw2c8" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.278328 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d431f69-61e4-4dd9-8a34-95a5cbcfc083-operator-scripts\") pod \"nova-cell1-db-create-w96wg\" (UID: \"4d431f69-61e4-4dd9-8a34-95a5cbcfc083\") " pod="openstack/nova-cell1-db-create-w96wg" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.294619 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6wl7\" (UniqueName: \"kubernetes.io/projected/6af8f31d-44ee-43b6-a9c8-2a86f391d33a-kube-api-access-j6wl7\") pod \"nova-cell0-f3ad-account-create-update-bw2c8\" (UID: \"6af8f31d-44ee-43b6-a9c8-2a86f391d33a\") " pod="openstack/nova-cell0-f3ad-account-create-update-bw2c8" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.297124 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-b73c-account-create-update-6hb95"] Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.298578 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-23fc-account-create-update-rlss8" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.298869 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b73c-account-create-update-6hb95"] Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.298934 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5fpf\" (UniqueName: \"kubernetes.io/projected/4d431f69-61e4-4dd9-8a34-95a5cbcfc083-kube-api-access-f5fpf\") pod \"nova-cell1-db-create-w96wg\" (UID: \"4d431f69-61e4-4dd9-8a34-95a5cbcfc083\") " pod="openstack/nova-cell1-db-create-w96wg" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.299016 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b73c-account-create-update-6hb95" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.301584 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.446199 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-w96wg" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.451116 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99ce4240-665b-4977-93f0-e0eff335bc4f-operator-scripts\") pod \"nova-cell1-b73c-account-create-update-6hb95\" (UID: \"99ce4240-665b-4977-93f0-e0eff335bc4f\") " pod="openstack/nova-cell1-b73c-account-create-update-6hb95" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.451217 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nj7w\" (UniqueName: \"kubernetes.io/projected/99ce4240-665b-4977-93f0-e0eff335bc4f-kube-api-access-8nj7w\") pod \"nova-cell1-b73c-account-create-update-6hb95\" (UID: \"99ce4240-665b-4977-93f0-e0eff335bc4f\") " pod="openstack/nova-cell1-b73c-account-create-update-6hb95" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.494515 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d3fd12da-d7cc-49bc-b30b-346a7dd11f92","Type":"ContainerStarted","Data":"5a2f192a9122f89e30e4cf35e6d7a091b4ff62db9021acfd65994c667e109c5b"} Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.507167 4727 generic.go:334] "Generic (PLEG): container finished" podID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerID="c68c1cd8ae8a63875e0ed1a952cc6bdcb696440e77ce166e4ba547453e4ea2b3" exitCode=0 Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.507216 4727 generic.go:334] "Generic (PLEG): container finished" podID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerID="5bb32fb54b1446b067d8415de0a40205f830a9e7a73c5363e265c16681fa5f7e" exitCode=2 Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.507225 4727 generic.go:334] "Generic (PLEG): container finished" podID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerID="aef2862e411c86c45ddec889a525f7996a75e5bd3d79e912e17fefa4bc200f65" exitCode=0 Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.507233 4727 generic.go:334] "Generic (PLEG): container finished" podID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerID="207f59d80d78f4eed20197e6b1e4a0863870c801daa6a8862338eb5c4df8d4cb" exitCode=0 Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.507534 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30db53bd-11a2-468d-9413-dcc1ab67bad5","Type":"ContainerDied","Data":"c68c1cd8ae8a63875e0ed1a952cc6bdcb696440e77ce166e4ba547453e4ea2b3"} Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.507569 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30db53bd-11a2-468d-9413-dcc1ab67bad5","Type":"ContainerDied","Data":"5bb32fb54b1446b067d8415de0a40205f830a9e7a73c5363e265c16681fa5f7e"} Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.507579 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30db53bd-11a2-468d-9413-dcc1ab67bad5","Type":"ContainerDied","Data":"aef2862e411c86c45ddec889a525f7996a75e5bd3d79e912e17fefa4bc200f65"} Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.507588 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30db53bd-11a2-468d-9413-dcc1ab67bad5","Type":"ContainerDied","Data":"207f59d80d78f4eed20197e6b1e4a0863870c801daa6a8862338eb5c4df8d4cb"} Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.516141 4727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.812508341 podStartE2EDuration="37.516118544s" podCreationTimestamp="2025-12-10 14:58:08 +0000 UTC" firstStartedPulling="2025-12-10 14:58:08.993042479 +0000 UTC m=+1593.187817021" lastFinishedPulling="2025-12-10 14:58:44.696652682 +0000 UTC m=+1628.891427224" observedRunningTime="2025-12-10 14:58:45.515211621 +0000 UTC m=+1629.709986163" watchObservedRunningTime="2025-12-10 14:58:45.516118544 +0000 UTC m=+1629.710893096" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.551340 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f3ad-account-create-update-bw2c8" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.554181 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99ce4240-665b-4977-93f0-e0eff335bc4f-operator-scripts\") pod \"nova-cell1-b73c-account-create-update-6hb95\" (UID: \"99ce4240-665b-4977-93f0-e0eff335bc4f\") " pod="openstack/nova-cell1-b73c-account-create-update-6hb95" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.554230 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nj7w\" (UniqueName: \"kubernetes.io/projected/99ce4240-665b-4977-93f0-e0eff335bc4f-kube-api-access-8nj7w\") pod \"nova-cell1-b73c-account-create-update-6hb95\" (UID: \"99ce4240-665b-4977-93f0-e0eff335bc4f\") " pod="openstack/nova-cell1-b73c-account-create-update-6hb95" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.555183 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99ce4240-665b-4977-93f0-e0eff335bc4f-operator-scripts\") pod \"nova-cell1-b73c-account-create-update-6hb95\" (UID: \"99ce4240-665b-4977-93f0-e0eff335bc4f\") " pod="openstack/nova-cell1-b73c-account-create-update-6hb95" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.598076 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nj7w\" (UniqueName: \"kubernetes.io/projected/99ce4240-665b-4977-93f0-e0eff335bc4f-kube-api-access-8nj7w\") pod \"nova-cell1-b73c-account-create-update-6hb95\" (UID: \"99ce4240-665b-4977-93f0-e0eff335bc4f\") " pod="openstack/nova-cell1-b73c-account-create-update-6hb95" Dec 10 14:58:45 crc kubenswrapper[4727]: I1210 14:58:45.831669 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b73c-account-create-update-6hb95" Dec 10 14:58:45 crc kubenswrapper[4727]: E1210 14:58:45.845577 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30db53bd_11a2_468d_9413_dcc1ab67bad5.slice/crio-conmon-207f59d80d78f4eed20197e6b1e4a0863870c801daa6a8862338eb5c4df8d4cb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30db53bd_11a2_468d_9413_dcc1ab67bad5.slice/crio-207f59d80d78f4eed20197e6b1e4a0863870c801daa6a8862338eb5c4df8d4cb.scope\": RecentStats: unable to find data in memory cache]" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.200626 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.201182 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.251505 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.262506 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.322133 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.354683 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7l2zw"] Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.381424 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n68l8\" (UniqueName: \"kubernetes.io/projected/30db53bd-11a2-468d-9413-dcc1ab67bad5-kube-api-access-n68l8\") pod \"30db53bd-11a2-468d-9413-dcc1ab67bad5\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.381488 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30db53bd-11a2-468d-9413-dcc1ab67bad5-log-httpd\") pod \"30db53bd-11a2-468d-9413-dcc1ab67bad5\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.381589 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30db53bd-11a2-468d-9413-dcc1ab67bad5-run-httpd\") pod \"30db53bd-11a2-468d-9413-dcc1ab67bad5\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.381645 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-combined-ca-bundle\") pod \"30db53bd-11a2-468d-9413-dcc1ab67bad5\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.381872 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-scripts\") pod \"30db53bd-11a2-468d-9413-dcc1ab67bad5\" (UID: 
\"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.381946 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-sg-core-conf-yaml\") pod \"30db53bd-11a2-468d-9413-dcc1ab67bad5\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.382006 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-config-data\") pod \"30db53bd-11a2-468d-9413-dcc1ab67bad5\" (UID: \"30db53bd-11a2-468d-9413-dcc1ab67bad5\") " Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.382734 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30db53bd-11a2-468d-9413-dcc1ab67bad5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "30db53bd-11a2-468d-9413-dcc1ab67bad5" (UID: "30db53bd-11a2-468d-9413-dcc1ab67bad5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.384209 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30db53bd-11a2-468d-9413-dcc1ab67bad5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "30db53bd-11a2-468d-9413-dcc1ab67bad5" (UID: "30db53bd-11a2-468d-9413-dcc1ab67bad5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.392222 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30db53bd-11a2-468d-9413-dcc1ab67bad5-kube-api-access-n68l8" (OuterVolumeSpecName: "kube-api-access-n68l8") pod "30db53bd-11a2-468d-9413-dcc1ab67bad5" (UID: "30db53bd-11a2-468d-9413-dcc1ab67bad5"). InnerVolumeSpecName "kube-api-access-n68l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.393381 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-scripts" (OuterVolumeSpecName: "scripts") pod "30db53bd-11a2-468d-9413-dcc1ab67bad5" (UID: "30db53bd-11a2-468d-9413-dcc1ab67bad5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.451272 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "30db53bd-11a2-468d-9413-dcc1ab67bad5" (UID: "30db53bd-11a2-468d-9413-dcc1ab67bad5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.485302 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30db53bd-11a2-468d-9413-dcc1ab67bad5-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.486064 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.486201 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.486361 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n68l8\" (UniqueName: \"kubernetes.io/projected/30db53bd-11a2-468d-9413-dcc1ab67bad5-kube-api-access-n68l8\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.486445 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30db53bd-11a2-468d-9413-dcc1ab67bad5-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.542733 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30db53bd-11a2-468d-9413-dcc1ab67bad5" (UID: "30db53bd-11a2-468d-9413-dcc1ab67bad5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.544562 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30db53bd-11a2-468d-9413-dcc1ab67bad5","Type":"ContainerDied","Data":"efa3b34192ab9c328e72048ad3ca6199f11f25e56abaddc00965385e26e1c02e"} Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.544621 4727 scope.go:117] "RemoveContainer" containerID="c68c1cd8ae8a63875e0ed1a952cc6bdcb696440e77ce166e4ba547453e4ea2b3" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.544760 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.563358 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.563812 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.591179 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.625677 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-config-data" (OuterVolumeSpecName: "config-data") pod "30db53bd-11a2-468d-9413-dcc1ab67bad5" (UID: "30db53bd-11a2-468d-9413-dcc1ab67bad5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.693869 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30db53bd-11a2-468d-9413-dcc1ab67bad5-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.697436 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7l2zw" event={"ID":"75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a","Type":"ContainerStarted","Data":"93accdf59e35f5a7a59d6df7bc6b8347716bbd251481d66cdf1bd10130704af4"} Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.697468 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.697483 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.742104 4727 scope.go:117] "RemoveContainer" containerID="5bb32fb54b1446b067d8415de0a40205f830a9e7a73c5363e265c16681fa5f7e" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.779225 4727 scope.go:117] "RemoveContainer" containerID="aef2862e411c86c45ddec889a525f7996a75e5bd3d79e912e17fefa4bc200f65" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.815202 4727 scope.go:117] "RemoveContainer" containerID="207f59d80d78f4eed20197e6b1e4a0863870c801daa6a8862338eb5c4df8d4cb" Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.839738 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f3ad-account-create-update-bw2c8"] Dec 10 14:58:46 crc kubenswrapper[4727]: I1210 14:58:46.985053 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.040501 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.051749 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:47 crc kubenswrapper[4727]: E1210 14:58:47.052342 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerName="ceilometer-central-agent" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.052368 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerName="ceilometer-central-agent" Dec 10 14:58:47 crc kubenswrapper[4727]: E1210 14:58:47.052392 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerName="ceilometer-notification-agent" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.052401 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerName="ceilometer-notification-agent" Dec 10 14:58:47 crc kubenswrapper[4727]: E1210 14:58:47.052438 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerName="sg-core" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.052447 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerName="sg-core" Dec 10 14:58:47 crc kubenswrapper[4727]: E1210 14:58:47.052460 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" 
containerName="proxy-httpd" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.052469 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerName="proxy-httpd" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.052727 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerName="ceilometer-notification-agent" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.052749 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerName="proxy-httpd" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.052769 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerName="sg-core" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.052792 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" containerName="ceilometer-central-agent" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.077537 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.093765 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.098889 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.117050 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.162177 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-23fc-account-create-update-rlss8"] Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.178159 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b73c-account-create-update-6hb95"] Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.218351 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f865924a-598e-4980-b5db-3a9fae0b4c12-log-httpd\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.218452 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-scripts\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.218502 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgbgn\" (UniqueName: \"kubernetes.io/projected/f865924a-598e-4980-b5db-3a9fae0b4c12-kube-api-access-bgbgn\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.218625 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 
crc kubenswrapper[4727]: I1210 14:58:47.218672 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-config-data\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.218708 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f865924a-598e-4980-b5db-3a9fae0b4c12-run-httpd\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.218730 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.227519 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8f779"] Dec 10 14:58:47 crc kubenswrapper[4727]: W1210 14:58:47.267461 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99ce4240_665b_4977_93f0_e0eff335bc4f.slice/crio-aa7c3533837eb030a140d39fced7dc283faf0303e2a5a52bb1ce17171aaf3cee WatchSource:0}: Error finding container aa7c3533837eb030a140d39fced7dc283faf0303e2a5a52bb1ce17171aaf3cee: Status 404 returned error can't find the container with id aa7c3533837eb030a140d39fced7dc283faf0303e2a5a52bb1ce17171aaf3cee Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.283667 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-w96wg"] Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.321807 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.321918 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-config-data\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.321977 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f865924a-598e-4980-b5db-3a9fae0b4c12-run-httpd\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.322000 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.322113 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f865924a-598e-4980-b5db-3a9fae0b4c12-log-httpd\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.322206 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-scripts\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.322271 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgbgn\" (UniqueName: \"kubernetes.io/projected/f865924a-598e-4980-b5db-3a9fae0b4c12-kube-api-access-bgbgn\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.323470 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f865924a-598e-4980-b5db-3a9fae0b4c12-run-httpd\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.326681 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f865924a-598e-4980-b5db-3a9fae0b4c12-log-httpd\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.355231 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgbgn\" (UniqueName: \"kubernetes.io/projected/f865924a-598e-4980-b5db-3a9fae0b4c12-kube-api-access-bgbgn\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.367843 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.372027 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-config-data\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.377461 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.380350 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-scripts\") pod \"ceilometer-0\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.577860 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.650686 4727 generic.go:334] "Generic (PLEG): container finished" podID="7e970c8b-0eb9-4607-9d74-559fdb8bd753" containerID="ed471da68915250d3a2c6b07b13ed84c8d0e5b2d20d304af326cc7a4246febcf" exitCode=137 Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.650807 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"7e970c8b-0eb9-4607-9d74-559fdb8bd753","Type":"ContainerDied","Data":"ed471da68915250d3a2c6b07b13ed84c8d0e5b2d20d304af326cc7a4246febcf"} Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.654558 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b73c-account-create-update-6hb95" event={"ID":"99ce4240-665b-4977-93f0-e0eff335bc4f","Type":"ContainerStarted","Data":"aa7c3533837eb030a140d39fced7dc283faf0303e2a5a52bb1ce17171aaf3cee"} Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.690691 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f3ad-account-create-update-bw2c8" event={"ID":"6af8f31d-44ee-43b6-a9c8-2a86f391d33a","Type":"ContainerStarted","Data":"0b85cec3e79ddf228a972a2f785ca2a36ccaed21807263a46dad6a989b0dd9f7"} Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.713308 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-w96wg" event={"ID":"4d431f69-61e4-4dd9-8a34-95a5cbcfc083","Type":"ContainerStarted","Data":"662e2fc68025d62dc2851d21851401806e011b6ee98d483533b37b80877f2c97"} Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.723345 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8f779" event={"ID":"065d3ade-3310-4073-843a-73b9d05651b0","Type":"ContainerStarted","Data":"43f83b292de7ab6513fe9a3f20235dd23ca05f82c33d015b8daff54bcde83f14"} Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.734394 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-f3ad-account-create-update-bw2c8" podStartSLOduration=2.734364828 podStartE2EDuration="2.734364828s" podCreationTimestamp="2025-12-10 14:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:47.722605421 +0000 UTC m=+1631.917379963" watchObservedRunningTime="2025-12-10 14:58:47.734364828 +0000 UTC m=+1631.929139370" Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.767340 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-23fc-account-create-update-rlss8" event={"ID":"046deabf-092c-4d2d-bbbf-0526f3a972bd","Type":"ContainerStarted","Data":"4ffc01d07a933321868f3d43974008ffeff5ca9deb101a36a1033e891ed6645c"} Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.775735 4727 generic.go:334] "Generic (PLEG): container finished" podID="75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a" containerID="4d44b3d81e380eb27da22b395fc9827ab8c5a1144de2ebff4eabcc22b3bac304" exitCode=0 Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.776857 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7l2zw" event={"ID":"75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a","Type":"ContainerDied","Data":"4d44b3d81e380eb27da22b395fc9827ab8c5a1144de2ebff4eabcc22b3bac304"} Dec 10 14:58:47 crc kubenswrapper[4727]: I1210 14:58:47.837949 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-23fc-account-create-update-rlss8" podStartSLOduration=3.8378898550000002 podStartE2EDuration="3.837889855s" podCreationTimestamp="2025-12-10 14:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:47.805814534 +0000 UTC m=+1632.000589076" watchObservedRunningTime="2025-12-10 14:58:47.837889855 +0000 UTC m=+1632.032664397" Dec 10 14:58:48 crc kubenswrapper[4727]: W1210 14:58:48.385875 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf865924a_598e_4980_b5db_3a9fae0b4c12.slice/crio-2e2948ef6c91b77f1fcf3fb804a36879c2623db75d530fa806e01d1e7ef7f12b WatchSource:0}: Error finding container 2e2948ef6c91b77f1fcf3fb804a36879c2623db75d530fa806e01d1e7ef7f12b: Status 404 returned error can't find the container with id 2e2948ef6c91b77f1fcf3fb804a36879c2623db75d530fa806e01d1e7ef7f12b Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.388122 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.397248 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.488791 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-scripts\") pod \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.488852 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-config-data\") pod \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.488939 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e970c8b-0eb9-4607-9d74-559fdb8bd753-logs\") pod \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.489007 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vxcx\" (UniqueName: \"kubernetes.io/projected/7e970c8b-0eb9-4607-9d74-559fdb8bd753-kube-api-access-7vxcx\") pod \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.489080 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/7e970c8b-0eb9-4607-9d74-559fdb8bd753-certs\") pod \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.489247 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-config-data-custom\") pod \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.489301 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-combined-ca-bundle\") pod \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\" (UID: \"7e970c8b-0eb9-4607-9d74-559fdb8bd753\") " Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.489882 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e970c8b-0eb9-4607-9d74-559fdb8bd753-logs" (OuterVolumeSpecName: "logs") pod "7e970c8b-0eb9-4607-9d74-559fdb8bd753" (UID: "7e970c8b-0eb9-4607-9d74-559fdb8bd753"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.496547 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7e970c8b-0eb9-4607-9d74-559fdb8bd753" (UID: "7e970c8b-0eb9-4607-9d74-559fdb8bd753"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.497422 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e970c8b-0eb9-4607-9d74-559fdb8bd753-certs" (OuterVolumeSpecName: "certs") pod "7e970c8b-0eb9-4607-9d74-559fdb8bd753" (UID: "7e970c8b-0eb9-4607-9d74-559fdb8bd753"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.498572 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-scripts" (OuterVolumeSpecName: "scripts") pod "7e970c8b-0eb9-4607-9d74-559fdb8bd753" (UID: "7e970c8b-0eb9-4607-9d74-559fdb8bd753"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.506837 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e970c8b-0eb9-4607-9d74-559fdb8bd753-kube-api-access-7vxcx" (OuterVolumeSpecName: "kube-api-access-7vxcx") pod "7e970c8b-0eb9-4607-9d74-559fdb8bd753" (UID: "7e970c8b-0eb9-4607-9d74-559fdb8bd753"). InnerVolumeSpecName "kube-api-access-7vxcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.538104 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e970c8b-0eb9-4607-9d74-559fdb8bd753" (UID: "7e970c8b-0eb9-4607-9d74-559fdb8bd753"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.558212 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-config-data" (OuterVolumeSpecName: "config-data") pod "7e970c8b-0eb9-4607-9d74-559fdb8bd753" (UID: "7e970c8b-0eb9-4607-9d74-559fdb8bd753"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.594289 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30db53bd-11a2-468d-9413-dcc1ab67bad5" path="/var/lib/kubelet/pods/30db53bd-11a2-468d-9413-dcc1ab67bad5/volumes" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.600015 4727 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/7e970c8b-0eb9-4607-9d74-559fdb8bd753-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.600058 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.600074 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.600087 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.600098 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e970c8b-0eb9-4607-9d74-559fdb8bd753-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.600120 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e970c8b-0eb9-4607-9d74-559fdb8bd753-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.600131 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vxcx\" (UniqueName: \"kubernetes.io/projected/7e970c8b-0eb9-4607-9d74-559fdb8bd753-kube-api-access-7vxcx\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.693235 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.693413 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.694639 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.789882 4727 generic.go:334] "Generic (PLEG): container finished" podID="065d3ade-3310-4073-843a-73b9d05651b0" containerID="7f895b50a758fc71594c7cbd758c4d91d66a67858f59d64feee90a60bb30f8f7" exitCode=0 Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.791296 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8f779" event={"ID":"065d3ade-3310-4073-843a-73b9d05651b0","Type":"ContainerDied","Data":"7f895b50a758fc71594c7cbd758c4d91d66a67858f59d64feee90a60bb30f8f7"} Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.807759 4727 generic.go:334] "Generic (PLEG): container finished" podID="046deabf-092c-4d2d-bbbf-0526f3a972bd" containerID="acfb31b9c3146b37746f2cd3b63f47c13256c4404a149b33243dfa86771e88c7" exitCode=0 Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.808000 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-23fc-account-create-update-rlss8" event={"ID":"046deabf-092c-4d2d-bbbf-0526f3a972bd","Type":"ContainerDied","Data":"acfb31b9c3146b37746f2cd3b63f47c13256c4404a149b33243dfa86771e88c7"} Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.814769 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"7e970c8b-0eb9-4607-9d74-559fdb8bd753","Type":"ContainerDied","Data":"4876c929aadec88b28499372ae8cdd42fafbcd813a8fdbf80b5fe1855ae0a1eb"} Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.814836 4727 scope.go:117] "RemoveContainer" containerID="ed471da68915250d3a2c6b07b13ed84c8d0e5b2d20d304af326cc7a4246febcf" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.815006 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.838661 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f865924a-598e-4980-b5db-3a9fae0b4c12","Type":"ContainerStarted","Data":"2e2948ef6c91b77f1fcf3fb804a36879c2623db75d530fa806e01d1e7ef7f12b"} Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.847612 4727 generic.go:334] "Generic (PLEG): container finished" podID="99ce4240-665b-4977-93f0-e0eff335bc4f" containerID="af37f655d5bc53ea2dbe49783942be71565088d3aea83d169140133012584b0a" exitCode=0 Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.847707 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b73c-account-create-update-6hb95" event={"ID":"99ce4240-665b-4977-93f0-e0eff335bc4f","Type":"ContainerDied","Data":"af37f655d5bc53ea2dbe49783942be71565088d3aea83d169140133012584b0a"} Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.863084 4727 scope.go:117] "RemoveContainer" containerID="c5d45e81f124a43c54e15b003fcfbdbbc2cb9df351b9d1eff3575afda63341f5" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.864304 4727 generic.go:334] "Generic (PLEG): container finished" podID="6af8f31d-44ee-43b6-a9c8-2a86f391d33a" containerID="a07469b98163a9585438ca68c1b83d192d29a1881a0d4acc3fc3e1c08bf5570d" exitCode=0 Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.864368 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f3ad-account-create-update-bw2c8" event={"ID":"6af8f31d-44ee-43b6-a9c8-2a86f391d33a","Type":"ContainerDied","Data":"a07469b98163a9585438ca68c1b83d192d29a1881a0d4acc3fc3e1c08bf5570d"} Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.867243 4727 generic.go:334] "Generic (PLEG): container finished" podID="4d431f69-61e4-4dd9-8a34-95a5cbcfc083" containerID="5775d709dd6393a7c9afca9314edc7eefff19f9eddc13d52b253b2c9d4e4aef3" exitCode=0 Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.869976 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.869998 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.875210 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-w96wg" event={"ID":"4d431f69-61e4-4dd9-8a34-95a5cbcfc083","Type":"ContainerDied","Data":"5775d709dd6393a7c9afca9314edc7eefff19f9eddc13d52b253b2c9d4e4aef3"} Dec 10 14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.926732 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 
14:58:48 crc kubenswrapper[4727]: I1210 14:58:48.987573 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.030280 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 14:58:49 crc kubenswrapper[4727]: E1210 14:58:49.030876 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e970c8b-0eb9-4607-9d74-559fdb8bd753" containerName="cloudkitty-api-log" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.030895 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e970c8b-0eb9-4607-9d74-559fdb8bd753" containerName="cloudkitty-api-log" Dec 10 14:58:49 crc kubenswrapper[4727]: E1210 14:58:49.030943 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e970c8b-0eb9-4607-9d74-559fdb8bd753" containerName="cloudkitty-api" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.031077 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e970c8b-0eb9-4607-9d74-559fdb8bd753" containerName="cloudkitty-api" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.031586 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e970c8b-0eb9-4607-9d74-559fdb8bd753" containerName="cloudkitty-api-log" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.031608 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e970c8b-0eb9-4607-9d74-559fdb8bd753" containerName="cloudkitty-api" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.035877 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.102411 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.102432 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.106236 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.129821 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.129928 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.129977 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/4439386b-d2b2-4ac3-a0e5-07623192084c-certs\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.129995 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqcgt\" (UniqueName: 
\"kubernetes.io/projected/4439386b-d2b2-4ac3-a0e5-07623192084c-kube-api-access-jqcgt\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.130013 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.130065 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4439386b-d2b2-4ac3-a0e5-07623192084c-logs\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.130129 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.130181 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-scripts\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.130241 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-config-data\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.178940 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.237962 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.239714 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.240148 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/4439386b-d2b2-4ac3-a0e5-07623192084c-certs\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.240169 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqcgt\" (UniqueName: 
\"kubernetes.io/projected/4439386b-d2b2-4ac3-a0e5-07623192084c-kube-api-access-jqcgt\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.240190 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.240288 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4439386b-d2b2-4ac3-a0e5-07623192084c-logs\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.240398 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.240496 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-scripts\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.240618 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-config-data\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.243729 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4439386b-d2b2-4ac3-a0e5-07623192084c-logs\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.261425 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.262558 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-config-data\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.263087 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.263659 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/projected/4439386b-d2b2-4ac3-a0e5-07623192084c-certs\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.264320 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.274815 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.279396 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4439386b-d2b2-4ac3-a0e5-07623192084c-scripts\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.303639 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqcgt\" (UniqueName: \"kubernetes.io/projected/4439386b-d2b2-4ac3-a0e5-07623192084c-kube-api-access-jqcgt\") pod \"cloudkitty-api-0\" (UID: \"4439386b-d2b2-4ac3-a0e5-07623192084c\") " pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.461979 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.562806 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7l2zw" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.649571 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd4s9\" (UniqueName: \"kubernetes.io/projected/75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a-kube-api-access-gd4s9\") pod \"75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a\" (UID: \"75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a\") " Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.649794 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a-operator-scripts\") pod \"75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a\" (UID: \"75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a\") " Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.650980 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a" (UID: "75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.661287 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a-kube-api-access-gd4s9" (OuterVolumeSpecName: "kube-api-access-gd4s9") pod "75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a" (UID: "75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a"). InnerVolumeSpecName "kube-api-access-gd4s9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.751967 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd4s9\" (UniqueName: \"kubernetes.io/projected/75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a-kube-api-access-gd4s9\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.752012 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.885071 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f865924a-598e-4980-b5db-3a9fae0b4c12","Type":"ContainerStarted","Data":"c8209fd0c1ae3747553e3e483f3b682112f1cca668ff194048c2e7444e2ab648"} Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.888590 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7l2zw" event={"ID":"75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a","Type":"ContainerDied","Data":"93accdf59e35f5a7a59d6df7bc6b8347716bbd251481d66cdf1bd10130704af4"} Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.888646 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93accdf59e35f5a7a59d6df7bc6b8347716bbd251481d66cdf1bd10130704af4" Dec 10 14:58:49 crc kubenswrapper[4727]: I1210 14:58:49.889048 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7l2zw" Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.067833 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.433566 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8f779" Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.467430 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.467587 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.603575 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/065d3ade-3310-4073-843a-73b9d05651b0-operator-scripts\") pod \"065d3ade-3310-4073-843a-73b9d05651b0\" (UID: \"065d3ade-3310-4073-843a-73b9d05651b0\") " Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.603863 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq7fs\" (UniqueName: \"kubernetes.io/projected/065d3ade-3310-4073-843a-73b9d05651b0-kube-api-access-fq7fs\") pod \"065d3ade-3310-4073-843a-73b9d05651b0\" (UID: \"065d3ade-3310-4073-843a-73b9d05651b0\") " Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.607646 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065d3ade-3310-4073-843a-73b9d05651b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "065d3ade-3310-4073-843a-73b9d05651b0" (UID: "065d3ade-3310-4073-843a-73b9d05651b0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.616000 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/065d3ade-3310-4073-843a-73b9d05651b0-kube-api-access-fq7fs" (OuterVolumeSpecName: "kube-api-access-fq7fs") pod "065d3ade-3310-4073-843a-73b9d05651b0" (UID: "065d3ade-3310-4073-843a-73b9d05651b0"). InnerVolumeSpecName "kube-api-access-fq7fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.654901 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e970c8b-0eb9-4607-9d74-559fdb8bd753" path="/var/lib/kubelet/pods/7e970c8b-0eb9-4607-9d74-559fdb8bd753/volumes" Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.687032 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.706715 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq7fs\" (UniqueName: \"kubernetes.io/projected/065d3ade-3310-4073-843a-73b9d05651b0-kube-api-access-fq7fs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.706761 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/065d3ade-3310-4073-843a-73b9d05651b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.936449 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"4439386b-d2b2-4ac3-a0e5-07623192084c","Type":"ContainerStarted","Data":"0e7a18e2c9e0e9d6c640f41f8fe03d3075471fec635e8d9a5e427fe6db6ec30d"} Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.936498 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"4439386b-d2b2-4ac3-a0e5-07623192084c","Type":"ContainerStarted","Data":"8049ae67b655c9f7302bc559a9e843c16d24b4cae1c861c223be87333d247f06"} Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.955017 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f865924a-598e-4980-b5db-3a9fae0b4c12","Type":"ContainerStarted","Data":"8da1a56f78f874ecc60a6537bf71a44139e01dcc65d79b2f045c9cfc2cd7b8f3"} Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.962430 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b73c-account-create-update-6hb95" event={"ID":"99ce4240-665b-4977-93f0-e0eff335bc4f","Type":"ContainerDied","Data":"aa7c3533837eb030a140d39fced7dc283faf0303e2a5a52bb1ce17171aaf3cee"} Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.962477 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa7c3533837eb030a140d39fced7dc283faf0303e2a5a52bb1ce17171aaf3cee" Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.972671 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f3ad-account-create-update-bw2c8" event={"ID":"6af8f31d-44ee-43b6-a9c8-2a86f391d33a","Type":"ContainerDied","Data":"0b85cec3e79ddf228a972a2f785ca2a36ccaed21807263a46dad6a989b0dd9f7"} Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.972722 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b85cec3e79ddf228a972a2f785ca2a36ccaed21807263a46dad6a989b0dd9f7" Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 
14:58:50.978460 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8f779" event={"ID":"065d3ade-3310-4073-843a-73b9d05651b0","Type":"ContainerDied","Data":"43f83b292de7ab6513fe9a3f20235dd23ca05f82c33d015b8daff54bcde83f14"} Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.978510 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43f83b292de7ab6513fe9a3f20235dd23ca05f82c33d015b8daff54bcde83f14" Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.978578 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8f779" Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.983068 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-23fc-account-create-update-rlss8" event={"ID":"046deabf-092c-4d2d-bbbf-0526f3a972bd","Type":"ContainerDied","Data":"4ffc01d07a933321868f3d43974008ffeff5ca9deb101a36a1033e891ed6645c"} Dec 10 14:58:50 crc kubenswrapper[4727]: I1210 14:58:50.983126 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ffc01d07a933321868f3d43974008ffeff5ca9deb101a36a1033e891ed6645c" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.037017 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b73c-account-create-update-6hb95" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.055716 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-23fc-account-create-update-rlss8" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.070338 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f3ad-account-create-update-bw2c8" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.085428 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-w96wg" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.121462 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6wl7\" (UniqueName: \"kubernetes.io/projected/6af8f31d-44ee-43b6-a9c8-2a86f391d33a-kube-api-access-j6wl7\") pod \"6af8f31d-44ee-43b6-a9c8-2a86f391d33a\" (UID: \"6af8f31d-44ee-43b6-a9c8-2a86f391d33a\") " Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.121601 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nj7w\" (UniqueName: \"kubernetes.io/projected/99ce4240-665b-4977-93f0-e0eff335bc4f-kube-api-access-8nj7w\") pod \"99ce4240-665b-4977-93f0-e0eff335bc4f\" (UID: \"99ce4240-665b-4977-93f0-e0eff335bc4f\") " Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.121905 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6af8f31d-44ee-43b6-a9c8-2a86f391d33a-operator-scripts\") pod \"6af8f31d-44ee-43b6-a9c8-2a86f391d33a\" (UID: \"6af8f31d-44ee-43b6-a9c8-2a86f391d33a\") " Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.121993 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r6r5\" (UniqueName: \"kubernetes.io/projected/046deabf-092c-4d2d-bbbf-0526f3a972bd-kube-api-access-4r6r5\") pod \"046deabf-092c-4d2d-bbbf-0526f3a972bd\" (UID: \"046deabf-092c-4d2d-bbbf-0526f3a972bd\") " Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.122099 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99ce4240-665b-4977-93f0-e0eff335bc4f-operator-scripts\") pod \"99ce4240-665b-4977-93f0-e0eff335bc4f\" (UID: \"99ce4240-665b-4977-93f0-e0eff335bc4f\") " Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.122175 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/046deabf-092c-4d2d-bbbf-0526f3a972bd-operator-scripts\") pod \"046deabf-092c-4d2d-bbbf-0526f3a972bd\" (UID: \"046deabf-092c-4d2d-bbbf-0526f3a972bd\") " Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.125790 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/046deabf-092c-4d2d-bbbf-0526f3a972bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "046deabf-092c-4d2d-bbbf-0526f3a972bd" (UID: "046deabf-092c-4d2d-bbbf-0526f3a972bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.126637 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99ce4240-665b-4977-93f0-e0eff335bc4f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "99ce4240-665b-4977-93f0-e0eff335bc4f" (UID: "99ce4240-665b-4977-93f0-e0eff335bc4f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.126664 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6af8f31d-44ee-43b6-a9c8-2a86f391d33a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6af8f31d-44ee-43b6-a9c8-2a86f391d33a" (UID: "6af8f31d-44ee-43b6-a9c8-2a86f391d33a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.144701 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/046deabf-092c-4d2d-bbbf-0526f3a972bd-kube-api-access-4r6r5" (OuterVolumeSpecName: "kube-api-access-4r6r5") pod "046deabf-092c-4d2d-bbbf-0526f3a972bd" (UID: "046deabf-092c-4d2d-bbbf-0526f3a972bd"). InnerVolumeSpecName "kube-api-access-4r6r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.144823 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af8f31d-44ee-43b6-a9c8-2a86f391d33a-kube-api-access-j6wl7" (OuterVolumeSpecName: "kube-api-access-j6wl7") pod "6af8f31d-44ee-43b6-a9c8-2a86f391d33a" (UID: "6af8f31d-44ee-43b6-a9c8-2a86f391d33a"). InnerVolumeSpecName "kube-api-access-j6wl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.144876 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ce4240-665b-4977-93f0-e0eff335bc4f-kube-api-access-8nj7w" (OuterVolumeSpecName: "kube-api-access-8nj7w") pod "99ce4240-665b-4977-93f0-e0eff335bc4f" (UID: "99ce4240-665b-4977-93f0-e0eff335bc4f"). InnerVolumeSpecName "kube-api-access-8nj7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.224067 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5fpf\" (UniqueName: \"kubernetes.io/projected/4d431f69-61e4-4dd9-8a34-95a5cbcfc083-kube-api-access-f5fpf\") pod \"4d431f69-61e4-4dd9-8a34-95a5cbcfc083\" (UID: \"4d431f69-61e4-4dd9-8a34-95a5cbcfc083\") " Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.224141 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d431f69-61e4-4dd9-8a34-95a5cbcfc083-operator-scripts\") pod \"4d431f69-61e4-4dd9-8a34-95a5cbcfc083\" (UID: \"4d431f69-61e4-4dd9-8a34-95a5cbcfc083\") " Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.224795 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6wl7\" (UniqueName: \"kubernetes.io/projected/6af8f31d-44ee-43b6-a9c8-2a86f391d33a-kube-api-access-j6wl7\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.224820 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nj7w\" (UniqueName: \"kubernetes.io/projected/99ce4240-665b-4977-93f0-e0eff335bc4f-kube-api-access-8nj7w\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.224832 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6af8f31d-44ee-43b6-a9c8-2a86f391d33a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.224844 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r6r5\" (UniqueName: \"kubernetes.io/projected/046deabf-092c-4d2d-bbbf-0526f3a972bd-kube-api-access-4r6r5\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.224868 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99ce4240-665b-4977-93f0-e0eff335bc4f-operator-scripts\") on node \"crc\" DevicePath 
\"\"" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.224887 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/046deabf-092c-4d2d-bbbf-0526f3a972bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.225442 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d431f69-61e4-4dd9-8a34-95a5cbcfc083-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d431f69-61e4-4dd9-8a34-95a5cbcfc083" (UID: "4d431f69-61e4-4dd9-8a34-95a5cbcfc083"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.229138 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d431f69-61e4-4dd9-8a34-95a5cbcfc083-kube-api-access-f5fpf" (OuterVolumeSpecName: "kube-api-access-f5fpf") pod "4d431f69-61e4-4dd9-8a34-95a5cbcfc083" (UID: "4d431f69-61e4-4dd9-8a34-95a5cbcfc083"). InnerVolumeSpecName "kube-api-access-f5fpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.326519 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5fpf\" (UniqueName: \"kubernetes.io/projected/4d431f69-61e4-4dd9-8a34-95a5cbcfc083-kube-api-access-f5fpf\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.326568 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d431f69-61e4-4dd9-8a34-95a5cbcfc083-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.998252 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"4439386b-d2b2-4ac3-a0e5-07623192084c","Type":"ContainerStarted","Data":"d7f4de06a5abbc6616acdb5ff892d311fdc2e15e1364a7e7961752eabc75120c"} Dec 10 14:58:51 crc kubenswrapper[4727]: I1210 14:58:51.998610 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Dec 10 14:58:52 crc kubenswrapper[4727]: I1210 14:58:52.004734 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f865924a-598e-4980-b5db-3a9fae0b4c12","Type":"ContainerStarted","Data":"b31e53301d1396ee631a0742655f8f010e7d4d17fb13926fd472265cd7d239a0"} Dec 10 14:58:52 crc kubenswrapper[4727]: I1210 14:58:52.008178 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-23fc-account-create-update-rlss8" Dec 10 14:58:52 crc kubenswrapper[4727]: I1210 14:58:52.011294 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-w96wg" Dec 10 14:58:52 crc kubenswrapper[4727]: I1210 14:58:52.011464 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-w96wg" event={"ID":"4d431f69-61e4-4dd9-8a34-95a5cbcfc083","Type":"ContainerDied","Data":"662e2fc68025d62dc2851d21851401806e011b6ee98d483533b37b80877f2c97"} Dec 10 14:58:52 crc kubenswrapper[4727]: I1210 14:58:52.011525 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="662e2fc68025d62dc2851d21851401806e011b6ee98d483533b37b80877f2c97" Dec 10 14:58:52 crc kubenswrapper[4727]: I1210 14:58:52.011485 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b73c-account-create-update-6hb95" Dec 10 14:58:52 crc kubenswrapper[4727]: I1210 14:58:52.011659 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f3ad-account-create-update-bw2c8" Dec 10 14:58:52 crc kubenswrapper[4727]: I1210 14:58:52.044409 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=4.044387161 podStartE2EDuration="4.044387161s" podCreationTimestamp="2025-12-10 14:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:52.022060217 +0000 UTC m=+1636.216834779" watchObservedRunningTime="2025-12-10 14:58:52.044387161 +0000 UTC m=+1636.239161703" Dec 10 14:58:53 crc kubenswrapper[4727]: I1210 14:58:53.313007 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="7e970c8b-0eb9-4607-9d74-559fdb8bd753" containerName="cloudkitty-api" probeResult="failure" output="Get \"http://10.217.0.186:8889/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:58:53 crc kubenswrapper[4727]: I1210 14:58:53.882843 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:54 crc kubenswrapper[4727]: I1210 14:58:54.050447 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f865924a-598e-4980-b5db-3a9fae0b4c12","Type":"ContainerStarted","Data":"339b54523be559a096a3c8ea1b74076469bec90e38277ddf77de695320d628fe"} Dec 10 14:58:54 crc kubenswrapper[4727]: I1210 14:58:54.050609 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 14:58:54 crc kubenswrapper[4727]: I1210 14:58:54.085374 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.9493616 podStartE2EDuration="8.085350365s" podCreationTimestamp="2025-12-10 14:58:46 +0000 UTC" firstStartedPulling="2025-12-10 14:58:48.388834689 +0000 UTC m=+1632.583609231" lastFinishedPulling="2025-12-10 14:58:52.524823454 +0000 UTC m=+1636.719597996" observedRunningTime="2025-12-10 14:58:54.073752171 +0000 UTC m=+1638.268526713" watchObservedRunningTime="2025-12-10 14:58:54.085350365 +0000 UTC m=+1638.280124927" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.064648 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="ceilometer-central-agent" containerID="cri-o://c8209fd0c1ae3747553e3e483f3b682112f1cca668ff194048c2e7444e2ab648" gracePeriod=30 Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.066968 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="proxy-httpd" containerID="cri-o://339b54523be559a096a3c8ea1b74076469bec90e38277ddf77de695320d628fe" gracePeriod=30 Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.067094 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="sg-core" containerID="cri-o://b31e53301d1396ee631a0742655f8f010e7d4d17fb13926fd472265cd7d239a0" gracePeriod=30 Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.067200 4727 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="ceilometer-notification-agent" containerID="cri-o://8da1a56f78f874ecc60a6537bf71a44139e01dcc65d79b2f045c9cfc2cd7b8f3" gracePeriod=30 Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.519239 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xkld5"] Dec 10 14:58:55 crc kubenswrapper[4727]: E1210 14:58:55.519933 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065d3ade-3310-4073-843a-73b9d05651b0" containerName="mariadb-database-create" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.519958 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="065d3ade-3310-4073-843a-73b9d05651b0" containerName="mariadb-database-create" Dec 10 14:58:55 crc kubenswrapper[4727]: E1210 14:58:55.519975 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a" containerName="mariadb-database-create" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.519982 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a" containerName="mariadb-database-create" Dec 10 14:58:55 crc kubenswrapper[4727]: E1210 14:58:55.519996 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ce4240-665b-4977-93f0-e0eff335bc4f" containerName="mariadb-account-create-update" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.520003 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ce4240-665b-4977-93f0-e0eff335bc4f" containerName="mariadb-account-create-update" Dec 10 14:58:55 crc kubenswrapper[4727]: E1210 14:58:55.520026 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af8f31d-44ee-43b6-a9c8-2a86f391d33a" containerName="mariadb-account-create-update" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.520033 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af8f31d-44ee-43b6-a9c8-2a86f391d33a" containerName="mariadb-account-create-update" Dec 10 14:58:55 crc kubenswrapper[4727]: E1210 14:58:55.520047 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d431f69-61e4-4dd9-8a34-95a5cbcfc083" containerName="mariadb-database-create" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.520058 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d431f69-61e4-4dd9-8a34-95a5cbcfc083" containerName="mariadb-database-create" Dec 10 14:58:55 crc kubenswrapper[4727]: E1210 14:58:55.520074 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046deabf-092c-4d2d-bbbf-0526f3a972bd" containerName="mariadb-account-create-update" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.520082 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="046deabf-092c-4d2d-bbbf-0526f3a972bd" containerName="mariadb-account-create-update" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.520368 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="065d3ade-3310-4073-843a-73b9d05651b0" containerName="mariadb-database-create" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.520392 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ce4240-665b-4977-93f0-e0eff335bc4f" containerName="mariadb-account-create-update" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.520413 4727 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4d431f69-61e4-4dd9-8a34-95a5cbcfc083" containerName="mariadb-database-create" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.520441 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a" containerName="mariadb-database-create" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.520458 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="046deabf-092c-4d2d-bbbf-0526f3a972bd" containerName="mariadb-account-create-update" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.520473 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af8f31d-44ee-43b6-a9c8-2a86f391d33a" containerName="mariadb-account-create-update" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.521478 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.525064 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-589zt" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.528745 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.539749 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.551511 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xkld5"] Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.648052 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-scripts\") pod \"nova-cell0-conductor-db-sync-xkld5\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.649139 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xkld5\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.649322 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-config-data\") pod \"nova-cell0-conductor-db-sync-xkld5\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.649373 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjmr9\" (UniqueName: \"kubernetes.io/projected/26cbc436-b0b3-4961-8a0e-48f797f04b5c-kube-api-access-zjmr9\") pod \"nova-cell0-conductor-db-sync-xkld5\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.751186 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-combined-ca-bundle\") pod 
\"nova-cell0-conductor-db-sync-xkld5\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.751336 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-config-data\") pod \"nova-cell0-conductor-db-sync-xkld5\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.751376 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjmr9\" (UniqueName: \"kubernetes.io/projected/26cbc436-b0b3-4961-8a0e-48f797f04b5c-kube-api-access-zjmr9\") pod \"nova-cell0-conductor-db-sync-xkld5\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.751426 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-scripts\") pod \"nova-cell0-conductor-db-sync-xkld5\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.758584 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-scripts\") pod \"nova-cell0-conductor-db-sync-xkld5\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.758835 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xkld5\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.758887 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-config-data\") pod \"nova-cell0-conductor-db-sync-xkld5\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.775228 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjmr9\" (UniqueName: \"kubernetes.io/projected/26cbc436-b0b3-4961-8a0e-48f797f04b5c-kube-api-access-zjmr9\") pod \"nova-cell0-conductor-db-sync-xkld5\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:58:55 crc kubenswrapper[4727]: I1210 14:58:55.846189 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:58:56 crc kubenswrapper[4727]: I1210 14:58:56.113576 4727 generic.go:334] "Generic (PLEG): container finished" podID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerID="339b54523be559a096a3c8ea1b74076469bec90e38277ddf77de695320d628fe" exitCode=0 Dec 10 14:58:56 crc kubenswrapper[4727]: I1210 14:58:56.114007 4727 generic.go:334] "Generic (PLEG): container finished" podID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerID="b31e53301d1396ee631a0742655f8f010e7d4d17fb13926fd472265cd7d239a0" exitCode=2 Dec 10 14:58:56 crc kubenswrapper[4727]: I1210 14:58:56.114022 4727 generic.go:334] "Generic (PLEG): container finished" podID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerID="8da1a56f78f874ecc60a6537bf71a44139e01dcc65d79b2f045c9cfc2cd7b8f3" exitCode=0 Dec 10 14:58:56 crc kubenswrapper[4727]: I1210 14:58:56.114052 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f865924a-598e-4980-b5db-3a9fae0b4c12","Type":"ContainerDied","Data":"339b54523be559a096a3c8ea1b74076469bec90e38277ddf77de695320d628fe"} Dec 10 14:58:56 crc kubenswrapper[4727]: I1210 14:58:56.114091 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f865924a-598e-4980-b5db-3a9fae0b4c12","Type":"ContainerDied","Data":"b31e53301d1396ee631a0742655f8f010e7d4d17fb13926fd472265cd7d239a0"} Dec 10 14:58:56 crc kubenswrapper[4727]: I1210 14:58:56.114124 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f865924a-598e-4980-b5db-3a9fae0b4c12","Type":"ContainerDied","Data":"8da1a56f78f874ecc60a6537bf71a44139e01dcc65d79b2f045c9cfc2cd7b8f3"} Dec 10 14:58:56 crc kubenswrapper[4727]: I1210 14:58:56.381144 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xkld5"] Dec 10 14:58:56 crc kubenswrapper[4727]: W1210 14:58:56.383666 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26cbc436_b0b3_4961_8a0e_48f797f04b5c.slice/crio-e18913ae40e35417f878ee0c84a4b1f8685daa8ac21cf74d86db5023cf822a4d WatchSource:0}: Error finding container e18913ae40e35417f878ee0c84a4b1f8685daa8ac21cf74d86db5023cf822a4d: Status 404 returned error can't find the container with id e18913ae40e35417f878ee0c84a4b1f8685daa8ac21cf74d86db5023cf822a4d Dec 10 14:58:57 crc kubenswrapper[4727]: I1210 14:58:57.134989 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xkld5" event={"ID":"26cbc436-b0b3-4961-8a0e-48f797f04b5c","Type":"ContainerStarted","Data":"e18913ae40e35417f878ee0c84a4b1f8685daa8ac21cf74d86db5023cf822a4d"} Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.008540 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.038308 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f865924a-598e-4980-b5db-3a9fae0b4c12-log-httpd\") pod \"f865924a-598e-4980-b5db-3a9fae0b4c12\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.038362 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-config-data\") pod \"f865924a-598e-4980-b5db-3a9fae0b4c12\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.038404 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-sg-core-conf-yaml\") pod \"f865924a-598e-4980-b5db-3a9fae0b4c12\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.038475 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgbgn\" (UniqueName: \"kubernetes.io/projected/f865924a-598e-4980-b5db-3a9fae0b4c12-kube-api-access-bgbgn\") pod \"f865924a-598e-4980-b5db-3a9fae0b4c12\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.038615 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-combined-ca-bundle\") pod \"f865924a-598e-4980-b5db-3a9fae0b4c12\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.038664 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-scripts\") pod \"f865924a-598e-4980-b5db-3a9fae0b4c12\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.038717 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f865924a-598e-4980-b5db-3a9fae0b4c12-run-httpd\") pod \"f865924a-598e-4980-b5db-3a9fae0b4c12\" (UID: \"f865924a-598e-4980-b5db-3a9fae0b4c12\") " Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.040232 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f865924a-598e-4980-b5db-3a9fae0b4c12-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f865924a-598e-4980-b5db-3a9fae0b4c12" (UID: "f865924a-598e-4980-b5db-3a9fae0b4c12"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.040415 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f865924a-598e-4980-b5db-3a9fae0b4c12-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f865924a-598e-4980-b5db-3a9fae0b4c12" (UID: "f865924a-598e-4980-b5db-3a9fae0b4c12"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.182052 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f865924a-598e-4980-b5db-3a9fae0b4c12-kube-api-access-bgbgn" (OuterVolumeSpecName: "kube-api-access-bgbgn") pod "f865924a-598e-4980-b5db-3a9fae0b4c12" (UID: "f865924a-598e-4980-b5db-3a9fae0b4c12"). InnerVolumeSpecName "kube-api-access-bgbgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.182521 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f865924a-598e-4980-b5db-3a9fae0b4c12-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.182551 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgbgn\" (UniqueName: \"kubernetes.io/projected/f865924a-598e-4980-b5db-3a9fae0b4c12-kube-api-access-bgbgn\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.182563 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f865924a-598e-4980-b5db-3a9fae0b4c12-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.190297 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-scripts" (OuterVolumeSpecName: "scripts") pod "f865924a-598e-4980-b5db-3a9fae0b4c12" (UID: "f865924a-598e-4980-b5db-3a9fae0b4c12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.217770 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f865924a-598e-4980-b5db-3a9fae0b4c12" (UID: "f865924a-598e-4980-b5db-3a9fae0b4c12"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.222349 4727 generic.go:334] "Generic (PLEG): container finished" podID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerID="c8209fd0c1ae3747553e3e483f3b682112f1cca668ff194048c2e7444e2ab648" exitCode=0 Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.222414 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f865924a-598e-4980-b5db-3a9fae0b4c12","Type":"ContainerDied","Data":"c8209fd0c1ae3747553e3e483f3b682112f1cca668ff194048c2e7444e2ab648"} Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.222451 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f865924a-598e-4980-b5db-3a9fae0b4c12","Type":"ContainerDied","Data":"2e2948ef6c91b77f1fcf3fb804a36879c2623db75d530fa806e01d1e7ef7f12b"} Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.222476 4727 scope.go:117] "RemoveContainer" containerID="339b54523be559a096a3c8ea1b74076469bec90e38277ddf77de695320d628fe" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.222656 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.284035 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.284066 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.301174 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f865924a-598e-4980-b5db-3a9fae0b4c12" (UID: "f865924a-598e-4980-b5db-3a9fae0b4c12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.351516 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-config-data" (OuterVolumeSpecName: "config-data") pod "f865924a-598e-4980-b5db-3a9fae0b4c12" (UID: "f865924a-598e-4980-b5db-3a9fae0b4c12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.387148 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.387184 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f865924a-598e-4980-b5db-3a9fae0b4c12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.597590 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.614656 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.648671 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:59 crc kubenswrapper[4727]: E1210 14:58:59.651009 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="sg-core" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.651048 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="sg-core" Dec 10 14:58:59 crc kubenswrapper[4727]: E1210 14:58:59.652206 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="ceilometer-central-agent" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.652235 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="ceilometer-central-agent" Dec 10 14:58:59 crc kubenswrapper[4727]: E1210 14:58:59.652248 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="proxy-httpd" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.652260 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="proxy-httpd" Dec 10 14:58:59 crc kubenswrapper[4727]: E1210 14:58:59.652299 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="ceilometer-notification-agent" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.652308 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="ceilometer-notification-agent" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.653044 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="proxy-httpd" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.653070 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="ceilometer-notification-agent" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.653114 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="ceilometer-central-agent" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.653141 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" containerName="sg-core" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.658454 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.658664 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.664110 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.664466 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.803534 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-config-data\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.803608 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/763ef528-c31c-450c-8bbf-b58349653667-log-httpd\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.803649 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.804095 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-scripts\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.804141 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/763ef528-c31c-450c-8bbf-b58349653667-run-httpd\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.804182 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvfbp\" (UniqueName: \"kubernetes.io/projected/763ef528-c31c-450c-8bbf-b58349653667-kube-api-access-rvfbp\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.804206 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.906185 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-scripts\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.906254 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/763ef528-c31c-450c-8bbf-b58349653667-run-httpd\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.906278 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvfbp\" (UniqueName: \"kubernetes.io/projected/763ef528-c31c-450c-8bbf-b58349653667-kube-api-access-rvfbp\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.906345 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.906451 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-config-data\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.906497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/763ef528-c31c-450c-8bbf-b58349653667-log-httpd\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0" Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.906520 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0" Dec 10 14:58:59 crc 
Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.907283 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/763ef528-c31c-450c-8bbf-b58349653667-run-httpd\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0"
Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.907357 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/763ef528-c31c-450c-8bbf-b58349653667-log-httpd\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0"
Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.921023 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-scripts\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0"
Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.921496 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0"
Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.921675 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-config-data\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0"
Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.921785 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0"
Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.924784 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvfbp\" (UniqueName: \"kubernetes.io/projected/763ef528-c31c-450c-8bbf-b58349653667-kube-api-access-rvfbp\") pod \"ceilometer-0\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") " pod="openstack/ceilometer-0"
Dec 10 14:58:59 crc kubenswrapper[4727]: I1210 14:58:59.983609 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 10 14:59:00 crc kubenswrapper[4727]: I1210 14:59:00.581583 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f865924a-598e-4980-b5db-3a9fae0b4c12" path="/var/lib/kubelet/pods/f865924a-598e-4980-b5db-3a9fae0b4c12/volumes"
Dec 10 14:59:00 crc kubenswrapper[4727]: I1210 14:59:00.909221 4727 scope.go:117] "RemoveContainer" containerID="b31e53301d1396ee631a0742655f8f010e7d4d17fb13926fd472265cd7d239a0"
Dec 10 14:59:00 crc kubenswrapper[4727]: I1210 14:59:00.941261 4727 scope.go:117] "RemoveContainer" containerID="8da1a56f78f874ecc60a6537bf71a44139e01dcc65d79b2f045c9cfc2cd7b8f3"
Dec 10 14:59:01 crc kubenswrapper[4727]: I1210 14:59:01.062515 4727 scope.go:117] "RemoveContainer" containerID="c8209fd0c1ae3747553e3e483f3b682112f1cca668ff194048c2e7444e2ab648"
Dec 10 14:59:01 crc kubenswrapper[4727]: I1210 14:59:01.094504 4727 scope.go:117] "RemoveContainer" containerID="339b54523be559a096a3c8ea1b74076469bec90e38277ddf77de695320d628fe"
Dec 10 14:59:01 crc kubenswrapper[4727]: E1210 14:59:01.095129 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"339b54523be559a096a3c8ea1b74076469bec90e38277ddf77de695320d628fe\": container with ID starting with 339b54523be559a096a3c8ea1b74076469bec90e38277ddf77de695320d628fe not found: ID does not exist" containerID="339b54523be559a096a3c8ea1b74076469bec90e38277ddf77de695320d628fe"
Dec 10 14:59:01 crc kubenswrapper[4727]: I1210 14:59:01.095174 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"339b54523be559a096a3c8ea1b74076469bec90e38277ddf77de695320d628fe"} err="failed to get container status \"339b54523be559a096a3c8ea1b74076469bec90e38277ddf77de695320d628fe\": rpc error: code = NotFound desc = could not find container \"339b54523be559a096a3c8ea1b74076469bec90e38277ddf77de695320d628fe\": container with ID starting with 339b54523be559a096a3c8ea1b74076469bec90e38277ddf77de695320d628fe not found: ID does not exist"
Dec 10 14:59:01 crc kubenswrapper[4727]: I1210 14:59:01.095203 4727 scope.go:117] "RemoveContainer" containerID="b31e53301d1396ee631a0742655f8f010e7d4d17fb13926fd472265cd7d239a0"
Dec 10 14:59:01 crc kubenswrapper[4727]: E1210 14:59:01.095599 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b31e53301d1396ee631a0742655f8f010e7d4d17fb13926fd472265cd7d239a0\": container with ID starting with b31e53301d1396ee631a0742655f8f010e7d4d17fb13926fd472265cd7d239a0 not found: ID does not exist" containerID="b31e53301d1396ee631a0742655f8f010e7d4d17fb13926fd472265cd7d239a0"
Dec 10 14:59:01 crc kubenswrapper[4727]: I1210 14:59:01.095740 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b31e53301d1396ee631a0742655f8f010e7d4d17fb13926fd472265cd7d239a0"} err="failed to get container status \"b31e53301d1396ee631a0742655f8f010e7d4d17fb13926fd472265cd7d239a0\": rpc error: code = NotFound desc = could not find container \"b31e53301d1396ee631a0742655f8f010e7d4d17fb13926fd472265cd7d239a0\": container with ID starting with b31e53301d1396ee631a0742655f8f010e7d4d17fb13926fd472265cd7d239a0 not found: ID does not exist"
Dec 10 14:59:01 crc kubenswrapper[4727]: I1210 14:59:01.095868 4727 scope.go:117] "RemoveContainer" containerID="8da1a56f78f874ecc60a6537bf71a44139e01dcc65d79b2f045c9cfc2cd7b8f3"
Dec 10 14:59:01 crc kubenswrapper[4727]: E1210 14:59:01.096649 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8da1a56f78f874ecc60a6537bf71a44139e01dcc65d79b2f045c9cfc2cd7b8f3\": container with ID starting with 8da1a56f78f874ecc60a6537bf71a44139e01dcc65d79b2f045c9cfc2cd7b8f3 not found: ID does not exist" containerID="8da1a56f78f874ecc60a6537bf71a44139e01dcc65d79b2f045c9cfc2cd7b8f3"
Dec 10 14:59:01 crc kubenswrapper[4727]: I1210 14:59:01.096694 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8da1a56f78f874ecc60a6537bf71a44139e01dcc65d79b2f045c9cfc2cd7b8f3"} err="failed to get container status \"8da1a56f78f874ecc60a6537bf71a44139e01dcc65d79b2f045c9cfc2cd7b8f3\": rpc error: code = NotFound desc = could not find container \"8da1a56f78f874ecc60a6537bf71a44139e01dcc65d79b2f045c9cfc2cd7b8f3\": container with ID starting with 8da1a56f78f874ecc60a6537bf71a44139e01dcc65d79b2f045c9cfc2cd7b8f3 not found: ID does not exist"
Dec 10 14:59:01 crc kubenswrapper[4727]: I1210 14:59:01.096711 4727 scope.go:117] "RemoveContainer" containerID="c8209fd0c1ae3747553e3e483f3b682112f1cca668ff194048c2e7444e2ab648"
Dec 10 14:59:01 crc kubenswrapper[4727]: E1210 14:59:01.097896 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8209fd0c1ae3747553e3e483f3b682112f1cca668ff194048c2e7444e2ab648\": container with ID starting with c8209fd0c1ae3747553e3e483f3b682112f1cca668ff194048c2e7444e2ab648 not found: ID does not exist" containerID="c8209fd0c1ae3747553e3e483f3b682112f1cca668ff194048c2e7444e2ab648"
Dec 10 14:59:01 crc kubenswrapper[4727]: I1210 14:59:01.097976 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8209fd0c1ae3747553e3e483f3b682112f1cca668ff194048c2e7444e2ab648"} err="failed to get container status \"c8209fd0c1ae3747553e3e483f3b682112f1cca668ff194048c2e7444e2ab648\": rpc error: code = NotFound desc = could not find container \"c8209fd0c1ae3747553e3e483f3b682112f1cca668ff194048c2e7444e2ab648\": container with ID starting with c8209fd0c1ae3747553e3e483f3b682112f1cca668ff194048c2e7444e2ab648 not found: ID does not exist"
Dec 10 14:59:01 crc kubenswrapper[4727]: I1210 14:59:01.416847 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 14:59:02 crc kubenswrapper[4727]: I1210 14:59:02.261687 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"763ef528-c31c-450c-8bbf-b58349653667","Type":"ContainerStarted","Data":"2ad808588951d7af90f5a83417225ce797b41a242e0f49ea7b4b45d6071970a3"}
Dec 10 14:59:08 crc kubenswrapper[4727]: I1210 14:59:08.657424 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-85zqb"]
Dec 10 14:59:08 crc kubenswrapper[4727]: I1210 14:59:08.661118 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:08 crc kubenswrapper[4727]: I1210 14:59:08.681548 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-85zqb"]
Dec 10 14:59:08 crc kubenswrapper[4727]: I1210 14:59:08.833062 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdt6g\" (UniqueName: \"kubernetes.io/projected/bd5ade71-a9d1-4539-a51a-1c2d458e1320-kube-api-access-jdt6g\") pod \"redhat-marketplace-85zqb\" (UID: \"bd5ade71-a9d1-4539-a51a-1c2d458e1320\") " pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:08 crc kubenswrapper[4727]: I1210 14:59:08.833116 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5ade71-a9d1-4539-a51a-1c2d458e1320-utilities\") pod \"redhat-marketplace-85zqb\" (UID: \"bd5ade71-a9d1-4539-a51a-1c2d458e1320\") " pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:08 crc kubenswrapper[4727]: I1210 14:59:08.833644 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5ade71-a9d1-4539-a51a-1c2d458e1320-catalog-content\") pod \"redhat-marketplace-85zqb\" (UID: \"bd5ade71-a9d1-4539-a51a-1c2d458e1320\") " pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:08 crc kubenswrapper[4727]: I1210 14:59:08.935742 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5ade71-a9d1-4539-a51a-1c2d458e1320-catalog-content\") pod \"redhat-marketplace-85zqb\" (UID: \"bd5ade71-a9d1-4539-a51a-1c2d458e1320\") " pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:08 crc kubenswrapper[4727]: I1210 14:59:08.935866 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdt6g\" (UniqueName: \"kubernetes.io/projected/bd5ade71-a9d1-4539-a51a-1c2d458e1320-kube-api-access-jdt6g\") pod \"redhat-marketplace-85zqb\" (UID: \"bd5ade71-a9d1-4539-a51a-1c2d458e1320\") " pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:08 crc kubenswrapper[4727]: I1210 14:59:08.935891 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5ade71-a9d1-4539-a51a-1c2d458e1320-utilities\") pod \"redhat-marketplace-85zqb\" (UID: \"bd5ade71-a9d1-4539-a51a-1c2d458e1320\") " pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:08 crc kubenswrapper[4727]: I1210 14:59:08.936647 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5ade71-a9d1-4539-a51a-1c2d458e1320-utilities\") pod \"redhat-marketplace-85zqb\" (UID: \"bd5ade71-a9d1-4539-a51a-1c2d458e1320\") " pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:08 crc kubenswrapper[4727]: I1210 14:59:08.936981 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5ade71-a9d1-4539-a51a-1c2d458e1320-catalog-content\") pod \"redhat-marketplace-85zqb\" (UID: \"bd5ade71-a9d1-4539-a51a-1c2d458e1320\") " pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:08 crc kubenswrapper[4727]: I1210 14:59:08.998105 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdt6g\" (UniqueName: \"kubernetes.io/projected/bd5ade71-a9d1-4539-a51a-1c2d458e1320-kube-api-access-jdt6g\") pod \"redhat-marketplace-85zqb\" (UID: \"bd5ade71-a9d1-4539-a51a-1c2d458e1320\") " pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:09 crc kubenswrapper[4727]: I1210 14:59:09.294547 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:09 crc kubenswrapper[4727]: I1210 14:59:09.433148 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xkld5" event={"ID":"26cbc436-b0b3-4961-8a0e-48f797f04b5c","Type":"ContainerStarted","Data":"c50ae70f186efe22dab02c813a21767ff112bf28193397173c172d84ff2eda35"}
Dec 10 14:59:09 crc kubenswrapper[4727]: I1210 14:59:09.435236 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"763ef528-c31c-450c-8bbf-b58349653667","Type":"ContainerStarted","Data":"7624f53e71e872ff52770975a3ecf1d0b3160821d998d711c70e15fd0d359152"}
Dec 10 14:59:09 crc kubenswrapper[4727]: I1210 14:59:09.915335 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xkld5" podStartSLOduration=2.943727704 podStartE2EDuration="14.915314017s" podCreationTimestamp="2025-12-10 14:58:55 +0000 UTC" firstStartedPulling="2025-12-10 14:58:56.386766772 +0000 UTC m=+1640.581541314" lastFinishedPulling="2025-12-10 14:59:08.358353085 +0000 UTC m=+1652.553127627" observedRunningTime="2025-12-10 14:59:09.461280091 +0000 UTC m=+1653.656054643" watchObservedRunningTime="2025-12-10 14:59:09.915314017 +0000 UTC m=+1654.110088559"
Dec 10 14:59:09 crc kubenswrapper[4727]: I1210 14:59:09.929536 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-85zqb"]
Dec 10 14:59:09 crc kubenswrapper[4727]: W1210 14:59:09.945408 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd5ade71_a9d1_4539_a51a_1c2d458e1320.slice/crio-291c3255cd64071fea8ad9c0e2e465242328b8b36c27caad3aaf20e9ea8c43a9 WatchSource:0}: Error finding container 291c3255cd64071fea8ad9c0e2e465242328b8b36c27caad3aaf20e9ea8c43a9: Status 404 returned error can't find the container with id 291c3255cd64071fea8ad9c0e2e465242328b8b36c27caad3aaf20e9ea8c43a9
Dec 10 14:59:10 crc kubenswrapper[4727]: I1210 14:59:10.510752 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"763ef528-c31c-450c-8bbf-b58349653667","Type":"ContainerStarted","Data":"a0eafac0ce104b19b0fea6f6a9ec1b9bfa7adca2ff7edd6f1d9e9a8a5ffaafd9"}
Dec 10 14:59:10 crc kubenswrapper[4727]: I1210 14:59:10.525697 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85zqb" event={"ID":"bd5ade71-a9d1-4539-a51a-1c2d458e1320","Type":"ContainerStarted","Data":"291c3255cd64071fea8ad9c0e2e465242328b8b36c27caad3aaf20e9ea8c43a9"}
Dec 10 14:59:11 crc kubenswrapper[4727]: I1210 14:59:11.541002 4727 generic.go:334] "Generic (PLEG): container finished" podID="bd5ade71-a9d1-4539-a51a-1c2d458e1320" containerID="40796530a8e3638b7ccf6ce749c004fc8328d12bc09407b14d6201ee2a00c0e0" exitCode=0
Dec 10 14:59:11 crc kubenswrapper[4727]: I1210 14:59:11.541115 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85zqb" event={"ID":"bd5ade71-a9d1-4539-a51a-1c2d458e1320","Type":"ContainerDied","Data":"40796530a8e3638b7ccf6ce749c004fc8328d12bc09407b14d6201ee2a00c0e0"}
Dec 10 14:59:11 crc kubenswrapper[4727]: I1210 14:59:11.544467 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"763ef528-c31c-450c-8bbf-b58349653667","Type":"ContainerStarted","Data":"7075ea6f336665ace597b9f9a644ef9241a2bdac4c54e30cca452bf127642aa6"}
Dec 10 14:59:13 crc kubenswrapper[4727]: I1210 14:59:13.636123 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 14:59:15 crc kubenswrapper[4727]: I1210 14:59:15.356179 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6gtb8"]
Dec 10 14:59:15 crc kubenswrapper[4727]: I1210 14:59:15.358735 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6gtb8"
Dec 10 14:59:15 crc kubenswrapper[4727]: I1210 14:59:15.373316 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6gtb8"]
Dec 10 14:59:15 crc kubenswrapper[4727]: I1210 14:59:15.521309 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e79a4b-ad5b-4dc0-ab86-650c80fb76b7-catalog-content\") pod \"certified-operators-6gtb8\" (UID: \"01e79a4b-ad5b-4dc0-ab86-650c80fb76b7\") " pod="openshift-marketplace/certified-operators-6gtb8"
Dec 10 14:59:15 crc kubenswrapper[4727]: I1210 14:59:15.521370 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4mwr\" (UniqueName: \"kubernetes.io/projected/01e79a4b-ad5b-4dc0-ab86-650c80fb76b7-kube-api-access-w4mwr\") pod \"certified-operators-6gtb8\" (UID: \"01e79a4b-ad5b-4dc0-ab86-650c80fb76b7\") " pod="openshift-marketplace/certified-operators-6gtb8"
Dec 10 14:59:15 crc kubenswrapper[4727]: I1210 14:59:15.521587 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e79a4b-ad5b-4dc0-ab86-650c80fb76b7-utilities\") pod \"certified-operators-6gtb8\" (UID: \"01e79a4b-ad5b-4dc0-ab86-650c80fb76b7\") " pod="openshift-marketplace/certified-operators-6gtb8"
Dec 10 14:59:15 crc kubenswrapper[4727]: I1210 14:59:15.624858 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e79a4b-ad5b-4dc0-ab86-650c80fb76b7-catalog-content\") pod \"certified-operators-6gtb8\" (UID: \"01e79a4b-ad5b-4dc0-ab86-650c80fb76b7\") " pod="openshift-marketplace/certified-operators-6gtb8"
Dec 10 14:59:15 crc kubenswrapper[4727]: I1210 14:59:15.624926 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4mwr\" (UniqueName: \"kubernetes.io/projected/01e79a4b-ad5b-4dc0-ab86-650c80fb76b7-kube-api-access-w4mwr\") pod \"certified-operators-6gtb8\" (UID: \"01e79a4b-ad5b-4dc0-ab86-650c80fb76b7\") " pod="openshift-marketplace/certified-operators-6gtb8"
Dec 10 14:59:15 crc kubenswrapper[4727]: I1210 14:59:15.625045 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e79a4b-ad5b-4dc0-ab86-650c80fb76b7-utilities\") pod \"certified-operators-6gtb8\" (UID: \"01e79a4b-ad5b-4dc0-ab86-650c80fb76b7\") " pod="openshift-marketplace/certified-operators-6gtb8"
Dec 10 14:59:15 crc kubenswrapper[4727]: I1210 14:59:15.625566 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e79a4b-ad5b-4dc0-ab86-650c80fb76b7-catalog-content\") pod \"certified-operators-6gtb8\" (UID: \"01e79a4b-ad5b-4dc0-ab86-650c80fb76b7\") " pod="openshift-marketplace/certified-operators-6gtb8"
Dec 10 14:59:15 crc kubenswrapper[4727]: I1210 14:59:15.625716 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e79a4b-ad5b-4dc0-ab86-650c80fb76b7-utilities\") pod \"certified-operators-6gtb8\" (UID: \"01e79a4b-ad5b-4dc0-ab86-650c80fb76b7\") " pod="openshift-marketplace/certified-operators-6gtb8"
Dec 10 14:59:15 crc kubenswrapper[4727]: I1210 14:59:15.651464 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4mwr\" (UniqueName: \"kubernetes.io/projected/01e79a4b-ad5b-4dc0-ab86-650c80fb76b7-kube-api-access-w4mwr\") pod \"certified-operators-6gtb8\" (UID: \"01e79a4b-ad5b-4dc0-ab86-650c80fb76b7\") " pod="openshift-marketplace/certified-operators-6gtb8"
Dec 10 14:59:15 crc kubenswrapper[4727]: I1210 14:59:15.685784 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6gtb8"
Dec 10 14:59:16 crc kubenswrapper[4727]: I1210 14:59:16.800255 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6gtb8"]
Dec 10 14:59:17 crc kubenswrapper[4727]: I1210 14:59:17.622758 4727 generic.go:334] "Generic (PLEG): container finished" podID="01e79a4b-ad5b-4dc0-ab86-650c80fb76b7" containerID="299a68a374eab9838f1774fc62ca7196d6a30872142f72dd90ede01307106a24" exitCode=0
Dec 10 14:59:17 crc kubenswrapper[4727]: I1210 14:59:17.623149 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gtb8" event={"ID":"01e79a4b-ad5b-4dc0-ab86-650c80fb76b7","Type":"ContainerDied","Data":"299a68a374eab9838f1774fc62ca7196d6a30872142f72dd90ede01307106a24"}
Dec 10 14:59:17 crc kubenswrapper[4727]: I1210 14:59:17.623246 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gtb8" event={"ID":"01e79a4b-ad5b-4dc0-ab86-650c80fb76b7","Type":"ContainerStarted","Data":"6eeb8c67932c4137dbefc3f2bde16bf3c5cae183f6a7d762044130885d0d6e36"}
Dec 10 14:59:17 crc kubenswrapper[4727]: I1210 14:59:17.627644 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"763ef528-c31c-450c-8bbf-b58349653667","Type":"ContainerStarted","Data":"54ee35f3746a10d1c8ca382bf90c11f65b65d1a5adceba8ffd3bb731853282c2"}
Dec 10 14:59:17 crc kubenswrapper[4727]: I1210 14:59:17.627846 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="ceilometer-central-agent" containerID="cri-o://7624f53e71e872ff52770975a3ecf1d0b3160821d998d711c70e15fd0d359152" gracePeriod=30
Dec 10 14:59:17 crc kubenswrapper[4727]: I1210 14:59:17.627942 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 10 14:59:17 crc kubenswrapper[4727]: I1210 14:59:17.627979 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="proxy-httpd" containerID="cri-o://54ee35f3746a10d1c8ca382bf90c11f65b65d1a5adceba8ffd3bb731853282c2" gracePeriod=30
Dec 10 14:59:17 crc kubenswrapper[4727]: I1210 14:59:17.628022 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="sg-core" containerID="cri-o://7075ea6f336665ace597b9f9a644ef9241a2bdac4c54e30cca452bf127642aa6" gracePeriod=30
Dec 10 14:59:17 crc kubenswrapper[4727]: I1210 14:59:17.628056 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="ceilometer-notification-agent" containerID="cri-o://a0eafac0ce104b19b0fea6f6a9ec1b9bfa7adca2ff7edd6f1d9e9a8a5ffaafd9" gracePeriod=30
Dec 10 14:59:17 crc kubenswrapper[4727]: I1210 14:59:17.632752 4727 generic.go:334] "Generic (PLEG): container finished" podID="bd5ade71-a9d1-4539-a51a-1c2d458e1320" containerID="dcf4c5411084ccff39bac3fb10f826389753a246e7a832d35616815fb85a644e" exitCode=0
Dec 10 14:59:17 crc kubenswrapper[4727]: I1210 14:59:17.632802 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85zqb" event={"ID":"bd5ade71-a9d1-4539-a51a-1c2d458e1320","Type":"ContainerDied","Data":"dcf4c5411084ccff39bac3fb10f826389753a246e7a832d35616815fb85a644e"}
Dec 10 14:59:17 crc kubenswrapper[4727]: I1210 14:59:17.743462 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.102634381 podStartE2EDuration="18.743434777s" podCreationTimestamp="2025-12-10 14:58:59 +0000 UTC" firstStartedPulling="2025-12-10 14:59:01.423353368 +0000 UTC m=+1645.618127910" lastFinishedPulling="2025-12-10 14:59:16.064153764 +0000 UTC m=+1660.258928306" observedRunningTime="2025-12-10 14:59:17.718237691 +0000 UTC m=+1661.913012243" watchObservedRunningTime="2025-12-10 14:59:17.743434777 +0000 UTC m=+1661.938209329"
Dec 10 14:59:18 crc kubenswrapper[4727]: I1210 14:59:18.645784 4727 generic.go:334] "Generic (PLEG): container finished" podID="763ef528-c31c-450c-8bbf-b58349653667" containerID="54ee35f3746a10d1c8ca382bf90c11f65b65d1a5adceba8ffd3bb731853282c2" exitCode=0
Dec 10 14:59:18 crc kubenswrapper[4727]: I1210 14:59:18.647232 4727 generic.go:334] "Generic (PLEG): container finished" podID="763ef528-c31c-450c-8bbf-b58349653667" containerID="7075ea6f336665ace597b9f9a644ef9241a2bdac4c54e30cca452bf127642aa6" exitCode=2
Dec 10 14:59:18 crc kubenswrapper[4727]: I1210 14:59:18.647340 4727 generic.go:334] "Generic (PLEG): container finished" podID="763ef528-c31c-450c-8bbf-b58349653667" containerID="a0eafac0ce104b19b0fea6f6a9ec1b9bfa7adca2ff7edd6f1d9e9a8a5ffaafd9" exitCode=0
Dec 10 14:59:18 crc kubenswrapper[4727]: I1210 14:59:18.645885 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"763ef528-c31c-450c-8bbf-b58349653667","Type":"ContainerDied","Data":"54ee35f3746a10d1c8ca382bf90c11f65b65d1a5adceba8ffd3bb731853282c2"}
Dec 10 14:59:18 crc kubenswrapper[4727]: I1210 14:59:18.647578 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"763ef528-c31c-450c-8bbf-b58349653667","Type":"ContainerDied","Data":"7075ea6f336665ace597b9f9a644ef9241a2bdac4c54e30cca452bf127642aa6"}
Dec 10 14:59:18 crc kubenswrapper[4727]: I1210 14:59:18.647682 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"763ef528-c31c-450c-8bbf-b58349653667","Type":"ContainerDied","Data":"a0eafac0ce104b19b0fea6f6a9ec1b9bfa7adca2ff7edd6f1d9e9a8a5ffaafd9"}
Dec 10 14:59:19 crc kubenswrapper[4727]: I1210 14:59:19.662399 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85zqb" event={"ID":"bd5ade71-a9d1-4539-a51a-1c2d458e1320","Type":"ContainerStarted","Data":"dc4fdb9cd1acdaccc63a19c4c6f66fe12c667ba97241ec6edba550bb5cf4ccd7"}
Dec 10 14:59:19 crc kubenswrapper[4727]: I1210 14:59:19.693874 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-85zqb" podStartSLOduration=4.813169488 podStartE2EDuration="11.693853592s" podCreationTimestamp="2025-12-10 14:59:08 +0000 UTC" firstStartedPulling="2025-12-10 14:59:11.543422027 +0000 UTC m=+1655.738196569" lastFinishedPulling="2025-12-10 14:59:18.424106121 +0000 UTC m=+1662.618880673" observedRunningTime="2025-12-10 14:59:19.685667286 +0000 UTC m=+1663.880441828" watchObservedRunningTime="2025-12-10 14:59:19.693853592 +0000 UTC m=+1663.888628134"
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.570790 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.622077 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-combined-ca-bundle\") pod \"763ef528-c31c-450c-8bbf-b58349653667\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") "
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.623096 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-scripts\") pod \"763ef528-c31c-450c-8bbf-b58349653667\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") "
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.623273 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvfbp\" (UniqueName: \"kubernetes.io/projected/763ef528-c31c-450c-8bbf-b58349653667-kube-api-access-rvfbp\") pod \"763ef528-c31c-450c-8bbf-b58349653667\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") "
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.623330 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/763ef528-c31c-450c-8bbf-b58349653667-log-httpd\") pod \"763ef528-c31c-450c-8bbf-b58349653667\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") "
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.623400 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-config-data\") pod \"763ef528-c31c-450c-8bbf-b58349653667\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") "
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.623489 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-sg-core-conf-yaml\") pod \"763ef528-c31c-450c-8bbf-b58349653667\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") "
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.623530 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/763ef528-c31c-450c-8bbf-b58349653667-run-httpd\") pod \"763ef528-c31c-450c-8bbf-b58349653667\" (UID: \"763ef528-c31c-450c-8bbf-b58349653667\") "
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.624058 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/763ef528-c31c-450c-8bbf-b58349653667-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "763ef528-c31c-450c-8bbf-b58349653667" (UID: "763ef528-c31c-450c-8bbf-b58349653667"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.623980 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/763ef528-c31c-450c-8bbf-b58349653667-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "763ef528-c31c-450c-8bbf-b58349653667" (UID: "763ef528-c31c-450c-8bbf-b58349653667"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.624721 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/763ef528-c31c-450c-8bbf-b58349653667-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.624743 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/763ef528-c31c-450c-8bbf-b58349653667-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.645487 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-scripts" (OuterVolumeSpecName: "scripts") pod "763ef528-c31c-450c-8bbf-b58349653667" (UID: "763ef528-c31c-450c-8bbf-b58349653667"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.652174 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/763ef528-c31c-450c-8bbf-b58349653667-kube-api-access-rvfbp" (OuterVolumeSpecName: "kube-api-access-rvfbp") pod "763ef528-c31c-450c-8bbf-b58349653667" (UID: "763ef528-c31c-450c-8bbf-b58349653667"). InnerVolumeSpecName "kube-api-access-rvfbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.708666 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "763ef528-c31c-450c-8bbf-b58349653667" (UID: "763ef528-c31c-450c-8bbf-b58349653667"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.727267 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.727314 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvfbp\" (UniqueName: \"kubernetes.io/projected/763ef528-c31c-450c-8bbf-b58349653667-kube-api-access-rvfbp\") on node \"crc\" DevicePath \"\""
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.727351 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.772357 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "763ef528-c31c-450c-8bbf-b58349653667" (UID: "763ef528-c31c-450c-8bbf-b58349653667"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.775765 4727 generic.go:334] "Generic (PLEG): container finished" podID="763ef528-c31c-450c-8bbf-b58349653667" containerID="7624f53e71e872ff52770975a3ecf1d0b3160821d998d711c70e15fd0d359152" exitCode=0
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.775888 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.775931 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"763ef528-c31c-450c-8bbf-b58349653667","Type":"ContainerDied","Data":"7624f53e71e872ff52770975a3ecf1d0b3160821d998d711c70e15fd0d359152"}
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.775971 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"763ef528-c31c-450c-8bbf-b58349653667","Type":"ContainerDied","Data":"2ad808588951d7af90f5a83417225ce797b41a242e0f49ea7b4b45d6071970a3"}
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.775992 4727 scope.go:117] "RemoveContainer" containerID="54ee35f3746a10d1c8ca382bf90c11f65b65d1a5adceba8ffd3bb731853282c2"
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.783644 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gtb8" event={"ID":"01e79a4b-ad5b-4dc0-ab86-650c80fb76b7","Type":"ContainerStarted","Data":"e88d10c7651ebe8a7183ae0aec5c05218a2e1be0a8de767a56ac40e047b7bd49"}
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.805592 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-config-data" (OuterVolumeSpecName: "config-data") pod "763ef528-c31c-450c-8bbf-b58349653667" (UID: "763ef528-c31c-450c-8bbf-b58349653667"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.829947 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.829991 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/763ef528-c31c-450c-8bbf-b58349653667-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.847108 4727 scope.go:117] "RemoveContainer" containerID="7075ea6f336665ace597b9f9a644ef9241a2bdac4c54e30cca452bf127642aa6"
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.889184 4727 scope.go:117] "RemoveContainer" containerID="a0eafac0ce104b19b0fea6f6a9ec1b9bfa7adca2ff7edd6f1d9e9a8a5ffaafd9"
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.922278 4727 scope.go:117] "RemoveContainer" containerID="7624f53e71e872ff52770975a3ecf1d0b3160821d998d711c70e15fd0d359152"
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.963942 4727 scope.go:117] "RemoveContainer" containerID="54ee35f3746a10d1c8ca382bf90c11f65b65d1a5adceba8ffd3bb731853282c2"
Dec 10 14:59:26 crc kubenswrapper[4727]: E1210 14:59:26.964471 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54ee35f3746a10d1c8ca382bf90c11f65b65d1a5adceba8ffd3bb731853282c2\": container with ID starting with 54ee35f3746a10d1c8ca382bf90c11f65b65d1a5adceba8ffd3bb731853282c2 not found: ID does not exist" containerID="54ee35f3746a10d1c8ca382bf90c11f65b65d1a5adceba8ffd3bb731853282c2"
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.964503 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ee35f3746a10d1c8ca382bf90c11f65b65d1a5adceba8ffd3bb731853282c2"} err="failed to get container status \"54ee35f3746a10d1c8ca382bf90c11f65b65d1a5adceba8ffd3bb731853282c2\": rpc error: code = NotFound desc = could not find container \"54ee35f3746a10d1c8ca382bf90c11f65b65d1a5adceba8ffd3bb731853282c2\": container with ID starting with 54ee35f3746a10d1c8ca382bf90c11f65b65d1a5adceba8ffd3bb731853282c2 not found: ID does not exist"
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.964536 4727 scope.go:117] "RemoveContainer" containerID="7075ea6f336665ace597b9f9a644ef9241a2bdac4c54e30cca452bf127642aa6"
Dec 10 14:59:26 crc kubenswrapper[4727]: E1210 14:59:26.964781 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7075ea6f336665ace597b9f9a644ef9241a2bdac4c54e30cca452bf127642aa6\": container with ID starting with 7075ea6f336665ace597b9f9a644ef9241a2bdac4c54e30cca452bf127642aa6 not found: ID does not exist" containerID="7075ea6f336665ace597b9f9a644ef9241a2bdac4c54e30cca452bf127642aa6"
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.964809 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7075ea6f336665ace597b9f9a644ef9241a2bdac4c54e30cca452bf127642aa6"} err="failed to get container status \"7075ea6f336665ace597b9f9a644ef9241a2bdac4c54e30cca452bf127642aa6\": rpc error: code = NotFound desc = could not find container \"7075ea6f336665ace597b9f9a644ef9241a2bdac4c54e30cca452bf127642aa6\": container with ID starting with 7075ea6f336665ace597b9f9a644ef9241a2bdac4c54e30cca452bf127642aa6 not found: ID does not exist"
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.964832 4727 scope.go:117] "RemoveContainer" containerID="a0eafac0ce104b19b0fea6f6a9ec1b9bfa7adca2ff7edd6f1d9e9a8a5ffaafd9"
Dec 10 14:59:26 crc kubenswrapper[4727]: E1210 14:59:26.965105 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0eafac0ce104b19b0fea6f6a9ec1b9bfa7adca2ff7edd6f1d9e9a8a5ffaafd9\": container with ID starting with a0eafac0ce104b19b0fea6f6a9ec1b9bfa7adca2ff7edd6f1d9e9a8a5ffaafd9 not found: ID does not exist" containerID="a0eafac0ce104b19b0fea6f6a9ec1b9bfa7adca2ff7edd6f1d9e9a8a5ffaafd9"
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.965129 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0eafac0ce104b19b0fea6f6a9ec1b9bfa7adca2ff7edd6f1d9e9a8a5ffaafd9"} err="failed to get container status \"a0eafac0ce104b19b0fea6f6a9ec1b9bfa7adca2ff7edd6f1d9e9a8a5ffaafd9\": rpc error: code = NotFound desc = could not find container \"a0eafac0ce104b19b0fea6f6a9ec1b9bfa7adca2ff7edd6f1d9e9a8a5ffaafd9\": container with ID starting with a0eafac0ce104b19b0fea6f6a9ec1b9bfa7adca2ff7edd6f1d9e9a8a5ffaafd9 not found: ID does not exist"
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.965147 4727 scope.go:117] "RemoveContainer" containerID="7624f53e71e872ff52770975a3ecf1d0b3160821d998d711c70e15fd0d359152"
Dec 10 14:59:26 crc kubenswrapper[4727]: E1210 14:59:26.965949 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7624f53e71e872ff52770975a3ecf1d0b3160821d998d711c70e15fd0d359152\": container with ID starting with 7624f53e71e872ff52770975a3ecf1d0b3160821d998d711c70e15fd0d359152 not found: ID does not exist" containerID="7624f53e71e872ff52770975a3ecf1d0b3160821d998d711c70e15fd0d359152"
Dec 10 14:59:26 crc kubenswrapper[4727]: I1210 14:59:26.965968 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7624f53e71e872ff52770975a3ecf1d0b3160821d998d711c70e15fd0d359152"} err="failed to get container status \"7624f53e71e872ff52770975a3ecf1d0b3160821d998d711c70e15fd0d359152\": rpc error: code = NotFound desc = could not find container \"7624f53e71e872ff52770975a3ecf1d0b3160821d998d711c70e15fd0d359152\": container with ID starting with 7624f53e71e872ff52770975a3ecf1d0b3160821d998d711c70e15fd0d359152 not found: ID does not exist"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.143608 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.158923 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.182570 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 10 14:59:27 crc kubenswrapper[4727]: E1210 14:59:27.183363 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="proxy-httpd"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.183403 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="proxy-httpd"
Dec 10 14:59:27 crc kubenswrapper[4727]: E1210 14:59:27.183418 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="ceilometer-notification-agent"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.183426 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="ceilometer-notification-agent"
Dec 10 14:59:27 crc kubenswrapper[4727]: E1210 14:59:27.183492 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="ceilometer-central-agent"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.183501 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="ceilometer-central-agent"
Dec 10 14:59:27 crc kubenswrapper[4727]: E1210 14:59:27.183510 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="sg-core"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.183518 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="sg-core"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.186086 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="ceilometer-central-agent"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.186112 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="proxy-httpd"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.186158 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="ceilometer-notification-agent"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.186184 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="763ef528-c31c-450c-8bbf-b58349653667" containerName="sg-core"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.189094 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.196288 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.215824 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.216666 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.237963 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.238118 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvrdp\" (UniqueName: \"kubernetes.io/projected/4124e168-5d24-4eb5-aada-4add345eb9e2-kube-api-access-gvrdp\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.238179 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.238305 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4124e168-5d24-4eb5-aada-4add345eb9e2-log-httpd\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.238409 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-config-data\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.238536 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4124e168-5d24-4eb5-aada-4add345eb9e2-run-httpd\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.238870 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-scripts\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.339964 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4124e168-5d24-4eb5-aada-4add345eb9e2-run-httpd\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.340593 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-scripts\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.340661 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.340709 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvrdp\" (UniqueName: \"kubernetes.io/projected/4124e168-5d24-4eb5-aada-4add345eb9e2-kube-api-access-gvrdp\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.340730 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.340776 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-config-data\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.340791 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4124e168-5d24-4eb5-aada-4add345eb9e2-log-httpd\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.341674 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4124e168-5d24-4eb5-aada-4add345eb9e2-log-httpd\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.342508 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4124e168-5d24-4eb5-aada-4add345eb9e2-run-httpd\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.346633 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-scripts\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.348225 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.348695 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-config-data\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.350634 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:27 crc kubenswrapper[4727]: I1210 14:59:27.360389 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvrdp\" (UniqueName: \"kubernetes.io/projected/4124e168-5d24-4eb5-aada-4add345eb9e2-kube-api-access-gvrdp\") pod \"ceilometer-0\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " pod="openstack/ceilometer-0"
Dec 10 14:59:28 crc kubenswrapper[4727]: I1210 14:59:28.255997 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 10 14:59:28 crc kubenswrapper[4727]: I1210 14:59:28.282261 4727 generic.go:334] "Generic (PLEG): container finished" podID="01e79a4b-ad5b-4dc0-ab86-650c80fb76b7" containerID="e88d10c7651ebe8a7183ae0aec5c05218a2e1be0a8de767a56ac40e047b7bd49" exitCode=0
Dec 10 14:59:28 crc kubenswrapper[4727]: I1210 14:59:28.284202 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gtb8" event={"ID":"01e79a4b-ad5b-4dc0-ab86-650c80fb76b7","Type":"ContainerDied","Data":"e88d10c7651ebe8a7183ae0aec5c05218a2e1be0a8de767a56ac40e047b7bd49"}
Dec 10 14:59:28 crc kubenswrapper[4727]: I1210 14:59:28.580646 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="763ef528-c31c-450c-8bbf-b58349653667" path="/var/lib/kubelet/pods/763ef528-c31c-450c-8bbf-b58349653667/volumes"
Dec 10 14:59:28 crc kubenswrapper[4727]: I1210 14:59:28.865601 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 14:59:28 crc kubenswrapper[4727]: W1210 14:59:28.874415 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4124e168_5d24_4eb5_aada_4add345eb9e2.slice/crio-69270a344394b0cd692b726e6c13e1c2d62807ae294cd2acba8d86f1a732c184 WatchSource:0}: Error finding container 69270a344394b0cd692b726e6c13e1c2d62807ae294cd2acba8d86f1a732c184: Status 404 returned error can't find the container with id 69270a344394b0cd692b726e6c13e1c2d62807ae294cd2acba8d86f1a732c184
Dec 10 14:59:29 crc kubenswrapper[4727]: I1210 14:59:29.295217 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:29 crc kubenswrapper[4727]: I1210 14:59:29.296605 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:29 crc kubenswrapper[4727]: I1210 14:59:29.332044 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4124e168-5d24-4eb5-aada-4add345eb9e2","Type":"ContainerStarted","Data":"69270a344394b0cd692b726e6c13e1c2d62807ae294cd2acba8d86f1a732c184"}
Dec 10 14:59:29 crc kubenswrapper[4727]: I1210 14:59:29.405246 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:29 crc kubenswrapper[4727]: I1210 14:59:29.485346 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="4439386b-d2b2-4ac3-a0e5-07623192084c" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.199:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 10 14:59:29 crc kubenswrapper[4727]: I1210 14:59:29.485815 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cloudkitty-api-0" podUID="4439386b-d2b2-4ac3-a0e5-07623192084c" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.199:8889/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 10 14:59:30 crc kubenswrapper[4727]: I1210 14:59:30.417483 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:30 crc kubenswrapper[4727]: I1210 14:59:30.491436 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-85zqb"]
Dec 10 14:59:32 crc kubenswrapper[4727]: I1210 14:59:32.363664 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-85zqb" podUID="bd5ade71-a9d1-4539-a51a-1c2d458e1320" containerName="registry-server" containerID="cri-o://dc4fdb9cd1acdaccc63a19c4c6f66fe12c667ba97241ec6edba550bb5cf4ccd7" gracePeriod=2
Dec 10 14:59:33 crc kubenswrapper[4727]: I1210 14:59:33.377476 4727 generic.go:334] "Generic (PLEG): container finished" podID="bd5ade71-a9d1-4539-a51a-1c2d458e1320" containerID="dc4fdb9cd1acdaccc63a19c4c6f66fe12c667ba97241ec6edba550bb5cf4ccd7" exitCode=0
Dec 10 14:59:33 crc kubenswrapper[4727]: I1210 14:59:33.377571 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85zqb" event={"ID":"bd5ade71-a9d1-4539-a51a-1c2d458e1320","Type":"ContainerDied","Data":"dc4fdb9cd1acdaccc63a19c4c6f66fe12c667ba97241ec6edba550bb5cf4ccd7"}
Dec 10 14:59:33 crc kubenswrapper[4727]: I1210 14:59:33.381231 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gtb8" event={"ID":"01e79a4b-ad5b-4dc0-ab86-650c80fb76b7","Type":"ContainerStarted","Data":"b3663472a62cf765aa0b1aa3cfa682f498f445ba5d88c23b353961c048641f49"}
Dec 10 14:59:33 crc kubenswrapper[4727]: I1210 14:59:33.386490 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0"
Dec 10 14:59:35 crc kubenswrapper[4727]: I1210 14:59:35.442654 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6gtb8" podStartSLOduration=6.15964748 podStartE2EDuration="20.442626312s" podCreationTimestamp="2025-12-10 14:59:15 +0000 UTC" firstStartedPulling="2025-12-10 14:59:17.62482886 +0000 UTC m=+1661.819603402" lastFinishedPulling="2025-12-10 14:59:31.907807692 +0000 UTC m=+1676.102582234" observedRunningTime="2025-12-10 14:59:35.435482262 +0000 UTC m=+1679.630256814" watchObservedRunningTime="2025-12-10 14:59:35.442626312 +0000 UTC m=+1679.637400854"
Dec 10 14:59:35 crc kubenswrapper[4727]: I1210 14:59:35.626554 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:35 crc kubenswrapper[4727]: I1210 14:59:35.688755 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6gtb8"
Dec 10 14:59:35 crc kubenswrapper[4727]: I1210 14:59:35.689072 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6gtb8"
Dec 10 14:59:35 crc kubenswrapper[4727]: I1210 14:59:35.746866 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5ade71-a9d1-4539-a51a-1c2d458e1320-utilities\") pod \"bd5ade71-a9d1-4539-a51a-1c2d458e1320\" (UID: \"bd5ade71-a9d1-4539-a51a-1c2d458e1320\") "
Dec 10 14:59:35 crc kubenswrapper[4727]: I1210 14:59:35.747141 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5ade71-a9d1-4539-a51a-1c2d458e1320-catalog-content\") pod \"bd5ade71-a9d1-4539-a51a-1c2d458e1320\" (UID: \"bd5ade71-a9d1-4539-a51a-1c2d458e1320\") "
Dec 10 14:59:35 crc kubenswrapper[4727]: I1210 14:59:35.747253 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdt6g\" (UniqueName: \"kubernetes.io/projected/bd5ade71-a9d1-4539-a51a-1c2d458e1320-kube-api-access-jdt6g\") pod \"bd5ade71-a9d1-4539-a51a-1c2d458e1320\" (UID: \"bd5ade71-a9d1-4539-a51a-1c2d458e1320\") "
Dec 10 14:59:35 crc kubenswrapper[4727]: I1210 14:59:35.748235 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd5ade71-a9d1-4539-a51a-1c2d458e1320-utilities" (OuterVolumeSpecName: "utilities") pod "bd5ade71-a9d1-4539-a51a-1c2d458e1320" (UID: "bd5ade71-a9d1-4539-a51a-1c2d458e1320"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:59:35 crc kubenswrapper[4727]: I1210 14:59:35.750359 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6gtb8"
Dec 10 14:59:35 crc kubenswrapper[4727]: I1210 14:59:35.755266 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5ade71-a9d1-4539-a51a-1c2d458e1320-kube-api-access-jdt6g" (OuterVolumeSpecName: "kube-api-access-jdt6g") pod "bd5ade71-a9d1-4539-a51a-1c2d458e1320" (UID: "bd5ade71-a9d1-4539-a51a-1c2d458e1320"). InnerVolumeSpecName "kube-api-access-jdt6g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:59:35 crc kubenswrapper[4727]: I1210 14:59:35.765361 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd5ade71-a9d1-4539-a51a-1c2d458e1320-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd5ade71-a9d1-4539-a51a-1c2d458e1320" (UID: "bd5ade71-a9d1-4539-a51a-1c2d458e1320"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:59:35 crc kubenswrapper[4727]: I1210 14:59:35.849550 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5ade71-a9d1-4539-a51a-1c2d458e1320-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 14:59:35 crc kubenswrapper[4727]: I1210 14:59:35.849582 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdt6g\" (UniqueName: \"kubernetes.io/projected/bd5ade71-a9d1-4539-a51a-1c2d458e1320-kube-api-access-jdt6g\") on node \"crc\" DevicePath \"\""
Dec 10 14:59:35 crc kubenswrapper[4727]: I1210 14:59:35.849595 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5ade71-a9d1-4539-a51a-1c2d458e1320-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 14:59:36 crc kubenswrapper[4727]: I1210 14:59:36.424804 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85zqb" event={"ID":"bd5ade71-a9d1-4539-a51a-1c2d458e1320","Type":"ContainerDied","Data":"291c3255cd64071fea8ad9c0e2e465242328b8b36c27caad3aaf20e9ea8c43a9"}
Dec 10 14:59:36 crc kubenswrapper[4727]: I1210 14:59:36.424880 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-85zqb"
Dec 10 14:59:36 crc kubenswrapper[4727]: I1210 14:59:36.424886 4727 scope.go:117] "RemoveContainer" containerID="dc4fdb9cd1acdaccc63a19c4c6f66fe12c667ba97241ec6edba550bb5cf4ccd7"
Dec 10 14:59:36 crc kubenswrapper[4727]: I1210 14:59:36.426203 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4124e168-5d24-4eb5-aada-4add345eb9e2","Type":"ContainerStarted","Data":"526ee02f6b6d318bc473490ef5c69590b95e08d6d691aff202d55b00ceea77af"}
Dec 10 14:59:36 crc kubenswrapper[4727]: I1210 14:59:36.477137 4727 scope.go:117] "RemoveContainer" containerID="dcf4c5411084ccff39bac3fb10f826389753a246e7a832d35616815fb85a644e"
Dec 10 14:59:36 crc kubenswrapper[4727]: I1210 14:59:36.479851 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-85zqb"]
Dec 10 14:59:36 crc kubenswrapper[4727]: I1210 14:59:36.500636 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-85zqb"]
Dec 10 14:59:36 crc kubenswrapper[4727]: I1210 14:59:36.539104 4727 scope.go:117] "RemoveContainer" containerID="40796530a8e3638b7ccf6ce749c004fc8328d12bc09407b14d6201ee2a00c0e0"
Dec 10 14:59:36 crc kubenswrapper[4727]: I1210 14:59:36.604740 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5ade71-a9d1-4539-a51a-1c2d458e1320" path="/var/lib/kubelet/pods/bd5ade71-a9d1-4539-a51a-1c2d458e1320/volumes"
Dec 10 14:59:37 crc kubenswrapper[4727]: I1210 14:59:37.443798 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4124e168-5d24-4eb5-aada-4add345eb9e2","Type":"ContainerStarted","Data":"b8f89e796222aa3865301b7fe9d57831de93d10e06621cbfbad281b496d58c6c"}
Dec 10 14:59:37 crc kubenswrapper[4727]: I1210 14:59:37.512545 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6gtb8"
Dec 10 14:59:37 crc kubenswrapper[4727]: I1210 14:59:37.724661 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 14:59:37 crc kubenswrapper[4727]: I1210 14:59:37.725075 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 14:59:38 crc kubenswrapper[4727]: I1210 14:59:38.484401 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4124e168-5d24-4eb5-aada-4add345eb9e2","Type":"ContainerStarted","Data":"5eaacda7a4edd18755cb2532a9120e0fdb6e76b5671180b3d1a0ce3bc2d7d124"}
Dec 10 14:59:39 crc kubenswrapper[4727]: I1210 14:59:39.483149 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6gtb8"]
Dec 10 14:59:39 crc kubenswrapper[4727]: I1210 14:59:39.510970 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4124e168-5d24-4eb5-aada-4add345eb9e2","Type":"ContainerStarted","Data":"976415abce4149cef2f2bf6c4d947647a9154404f4b53ee7f2b40b5bfcef535e"}
Dec 10 14:59:39 crc kubenswrapper[4727]: I1210 14:59:39.511040 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 10 14:59:39 crc kubenswrapper[4727]: I1210 14:59:39.566234 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.399112257 podStartE2EDuration="12.566210204s" podCreationTimestamp="2025-12-10 14:59:27 +0000 UTC" firstStartedPulling="2025-12-10 14:59:28.876302503 +0000 UTC m=+1673.071077045" lastFinishedPulling="2025-12-10 14:59:39.04340044 +0000 UTC m=+1683.238174992" observedRunningTime="2025-12-10 14:59:39.557686198 +0000 UTC m=+1683.752460760" watchObservedRunningTime="2025-12-10 14:59:39.566210204 +0000 UTC m=+1683.760984746"
Dec 10 14:59:39 crc kubenswrapper[4727]: I1210 14:59:39.853339 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zf42t"]
Dec 10 14:59:39 crc kubenswrapper[4727]: I1210 14:59:39.853946 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zf42t" podUID="8089752a-539b-4f05-81d6-ef383b753227" containerName="registry-server" containerID="cri-o://101502e78d036f6fa46932b317e3f33e89e5d40f792629350a02eeffa7c68f2f" gracePeriod=2
Dec 10 14:59:40 crc kubenswrapper[4727]: I1210 14:59:40.522882 4727 generic.go:334] "Generic (PLEG): container finished" podID="8089752a-539b-4f05-81d6-ef383b753227" containerID="101502e78d036f6fa46932b317e3f33e89e5d40f792629350a02eeffa7c68f2f" exitCode=0
Dec 10 14:59:40 crc kubenswrapper[4727]: I1210 14:59:40.522969 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf42t" event={"ID":"8089752a-539b-4f05-81d6-ef383b753227","Type":"ContainerDied","Data":"101502e78d036f6fa46932b317e3f33e89e5d40f792629350a02eeffa7c68f2f"}
Dec 10 14:59:40 crc kubenswrapper[4727]: I1210 14:59:40.523254 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf42t" event={"ID":"8089752a-539b-4f05-81d6-ef383b753227","Type":"ContainerDied","Data":"2b6c02d9eea69f5e3e0c2559c00c2afe2269e80f3283c82d8a6395c4cab72862"}
Dec 10 14:59:40 crc kubenswrapper[4727]: I1210 14:59:40.523276 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b6c02d9eea69f5e3e0c2559c00c2afe2269e80f3283c82d8a6395c4cab72862"
Dec 10 14:59:40 crc kubenswrapper[4727]: I1210 14:59:40.550271 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zf42t"
Dec 10 14:59:40 crc kubenswrapper[4727]: I1210 14:59:40.684564 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8089752a-539b-4f05-81d6-ef383b753227-catalog-content\") pod \"8089752a-539b-4f05-81d6-ef383b753227\" (UID: \"8089752a-539b-4f05-81d6-ef383b753227\") "
Dec 10 14:59:40 crc kubenswrapper[4727]: I1210 14:59:40.684694 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8089752a-539b-4f05-81d6-ef383b753227-utilities\") pod \"8089752a-539b-4f05-81d6-ef383b753227\" (UID: \"8089752a-539b-4f05-81d6-ef383b753227\") "
Dec 10 14:59:40 crc kubenswrapper[4727]: I1210 14:59:40.684734 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glmnf\" (UniqueName: \"kubernetes.io/projected/8089752a-539b-4f05-81d6-ef383b753227-kube-api-access-glmnf\") pod \"8089752a-539b-4f05-81d6-ef383b753227\" (UID: \"8089752a-539b-4f05-81d6-ef383b753227\") "
Dec 10 14:59:40 crc kubenswrapper[4727]: I1210 14:59:40.685255 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8089752a-539b-4f05-81d6-ef383b753227-utilities" (OuterVolumeSpecName: "utilities") pod "8089752a-539b-4f05-81d6-ef383b753227" (UID: "8089752a-539b-4f05-81d6-ef383b753227"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:59:40 crc kubenswrapper[4727]: I1210 14:59:40.686635 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8089752a-539b-4f05-81d6-ef383b753227-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 14:59:40 crc kubenswrapper[4727]: I1210 14:59:40.708632 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8089752a-539b-4f05-81d6-ef383b753227-kube-api-access-glmnf" (OuterVolumeSpecName: "kube-api-access-glmnf") pod "8089752a-539b-4f05-81d6-ef383b753227" (UID: "8089752a-539b-4f05-81d6-ef383b753227"). InnerVolumeSpecName "kube-api-access-glmnf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:59:40 crc kubenswrapper[4727]: I1210 14:59:40.788612 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glmnf\" (UniqueName: \"kubernetes.io/projected/8089752a-539b-4f05-81d6-ef383b753227-kube-api-access-glmnf\") on node \"crc\" DevicePath \"\""
Dec 10 14:59:40 crc kubenswrapper[4727]: I1210 14:59:40.793372 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8089752a-539b-4f05-81d6-ef383b753227-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8089752a-539b-4f05-81d6-ef383b753227" (UID: "8089752a-539b-4f05-81d6-ef383b753227"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:59:40 crc kubenswrapper[4727]: I1210 14:59:40.890452 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8089752a-539b-4f05-81d6-ef383b753227-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:41 crc kubenswrapper[4727]: I1210 14:59:41.535009 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zf42t" Dec 10 14:59:41 crc kubenswrapper[4727]: I1210 14:59:41.573882 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zf42t"] Dec 10 14:59:41 crc kubenswrapper[4727]: I1210 14:59:41.593986 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zf42t"] Dec 10 14:59:42 crc kubenswrapper[4727]: I1210 14:59:42.578206 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8089752a-539b-4f05-81d6-ef383b753227" path="/var/lib/kubelet/pods/8089752a-539b-4f05-81d6-ef383b753227/volumes" Dec 10 14:59:44 crc kubenswrapper[4727]: I1210 14:59:44.579583 4727 generic.go:334] "Generic (PLEG): container finished" podID="26cbc436-b0b3-4961-8a0e-48f797f04b5c" containerID="c50ae70f186efe22dab02c813a21767ff112bf28193397173c172d84ff2eda35" exitCode=0 Dec 10 14:59:44 crc kubenswrapper[4727]: I1210 14:59:44.579931 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xkld5" event={"ID":"26cbc436-b0b3-4961-8a0e-48f797f04b5c","Type":"ContainerDied","Data":"c50ae70f186efe22dab02c813a21767ff112bf28193397173c172d84ff2eda35"} Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.035611 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.126628 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjmr9\" (UniqueName: \"kubernetes.io/projected/26cbc436-b0b3-4961-8a0e-48f797f04b5c-kube-api-access-zjmr9\") pod \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.126874 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-scripts\") pod \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.127012 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-config-data\") pod \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.127118 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-combined-ca-bundle\") pod \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\" (UID: \"26cbc436-b0b3-4961-8a0e-48f797f04b5c\") " Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.133222 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-scripts" (OuterVolumeSpecName: "scripts") pod "26cbc436-b0b3-4961-8a0e-48f797f04b5c" (UID: "26cbc436-b0b3-4961-8a0e-48f797f04b5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.142122 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26cbc436-b0b3-4961-8a0e-48f797f04b5c-kube-api-access-zjmr9" (OuterVolumeSpecName: "kube-api-access-zjmr9") pod "26cbc436-b0b3-4961-8a0e-48f797f04b5c" (UID: "26cbc436-b0b3-4961-8a0e-48f797f04b5c"). InnerVolumeSpecName "kube-api-access-zjmr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.170559 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26cbc436-b0b3-4961-8a0e-48f797f04b5c" (UID: "26cbc436-b0b3-4961-8a0e-48f797f04b5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.172968 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-config-data" (OuterVolumeSpecName: "config-data") pod "26cbc436-b0b3-4961-8a0e-48f797f04b5c" (UID: "26cbc436-b0b3-4961-8a0e-48f797f04b5c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.239028 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.239073 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.239088 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cbc436-b0b3-4961-8a0e-48f797f04b5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.239101 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjmr9\" (UniqueName: \"kubernetes.io/projected/26cbc436-b0b3-4961-8a0e-48f797f04b5c-kube-api-access-zjmr9\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.613500 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xkld5" event={"ID":"26cbc436-b0b3-4961-8a0e-48f797f04b5c","Type":"ContainerDied","Data":"e18913ae40e35417f878ee0c84a4b1f8685daa8ac21cf74d86db5023cf822a4d"} Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.613541 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xkld5" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.613554 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e18913ae40e35417f878ee0c84a4b1f8685daa8ac21cf74d86db5023cf822a4d" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.726464 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 14:59:46 crc kubenswrapper[4727]: E1210 14:59:46.727025 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8089752a-539b-4f05-81d6-ef383b753227" containerName="extract-content" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.727044 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8089752a-539b-4f05-81d6-ef383b753227" containerName="extract-content" Dec 10 14:59:46 crc kubenswrapper[4727]: E1210 14:59:46.727063 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8089752a-539b-4f05-81d6-ef383b753227" containerName="extract-utilities" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.727071 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8089752a-539b-4f05-81d6-ef383b753227" containerName="extract-utilities" Dec 10 14:59:46 crc kubenswrapper[4727]: E1210 14:59:46.727086 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26cbc436-b0b3-4961-8a0e-48f797f04b5c" containerName="nova-cell0-conductor-db-sync" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.727092 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="26cbc436-b0b3-4961-8a0e-48f797f04b5c" containerName="nova-cell0-conductor-db-sync" Dec 10 14:59:46 crc kubenswrapper[4727]: E1210 14:59:46.727108 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8089752a-539b-4f05-81d6-ef383b753227" containerName="registry-server" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.727114 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8089752a-539b-4f05-81d6-ef383b753227" containerName="registry-server" Dec 10 14:59:46 crc kubenswrapper[4727]: E1210 14:59:46.727129 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5ade71-a9d1-4539-a51a-1c2d458e1320" containerName="extract-content" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.727135 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5ade71-a9d1-4539-a51a-1c2d458e1320" containerName="extract-content" Dec 10 14:59:46 crc kubenswrapper[4727]: E1210 14:59:46.727142 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5ade71-a9d1-4539-a51a-1c2d458e1320" containerName="registry-server" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.727148 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5ade71-a9d1-4539-a51a-1c2d458e1320" containerName="registry-server" Dec 10 14:59:46 crc kubenswrapper[4727]: E1210 14:59:46.727162 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5ade71-a9d1-4539-a51a-1c2d458e1320" containerName="extract-utilities" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.727168 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5ade71-a9d1-4539-a51a-1c2d458e1320" containerName="extract-utilities" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.727370 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="26cbc436-b0b3-4961-8a0e-48f797f04b5c" containerName="nova-cell0-conductor-db-sync" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.727383 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8089752a-539b-4f05-81d6-ef383b753227" containerName="registry-server" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.727398 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5ade71-a9d1-4539-a51a-1c2d458e1320" containerName="registry-server" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.728207 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.731692 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.731946 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-589zt" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.753355 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc159773-04ff-4e53-8ab1-7a292491299e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bc159773-04ff-4e53-8ab1-7a292491299e\") " pod="openstack/nova-cell0-conductor-0" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.753515 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc159773-04ff-4e53-8ab1-7a292491299e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bc159773-04ff-4e53-8ab1-7a292491299e\") " pod="openstack/nova-cell0-conductor-0" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.753753 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgc88\" (UniqueName: \"kubernetes.io/projected/bc159773-04ff-4e53-8ab1-7a292491299e-kube-api-access-rgc88\") pod \"nova-cell0-conductor-0\" (UID: \"bc159773-04ff-4e53-8ab1-7a292491299e\") " pod="openstack/nova-cell0-conductor-0" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.756086 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.857746 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc159773-04ff-4e53-8ab1-7a292491299e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bc159773-04ff-4e53-8ab1-7a292491299e\") " pod="openstack/nova-cell0-conductor-0" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.858114 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgc88\" (UniqueName: \"kubernetes.io/projected/bc159773-04ff-4e53-8ab1-7a292491299e-kube-api-access-rgc88\") pod \"nova-cell0-conductor-0\" (UID: \"bc159773-04ff-4e53-8ab1-7a292491299e\") " pod="openstack/nova-cell0-conductor-0" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.858214 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc159773-04ff-4e53-8ab1-7a292491299e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bc159773-04ff-4e53-8ab1-7a292491299e\") " pod="openstack/nova-cell0-conductor-0" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.875721 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc159773-04ff-4e53-8ab1-7a292491299e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bc159773-04ff-4e53-8ab1-7a292491299e\") " pod="openstack/nova-cell0-conductor-0" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.878238 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc159773-04ff-4e53-8ab1-7a292491299e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"bc159773-04ff-4e53-8ab1-7a292491299e\") " pod="openstack/nova-cell0-conductor-0" Dec 10 14:59:46 crc kubenswrapper[4727]: I1210 14:59:46.880991 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgc88\" (UniqueName: \"kubernetes.io/projected/bc159773-04ff-4e53-8ab1-7a292491299e-kube-api-access-rgc88\") pod \"nova-cell0-conductor-0\" (UID: \"bc159773-04ff-4e53-8ab1-7a292491299e\") " pod="openstack/nova-cell0-conductor-0" Dec 10 14:59:47 crc kubenswrapper[4727]: I1210 14:59:47.074414 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 10 14:59:47 crc kubenswrapper[4727]: I1210 14:59:47.551077 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 14:59:47 crc kubenswrapper[4727]: W1210 14:59:47.559355 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc159773_04ff_4e53_8ab1_7a292491299e.slice/crio-da11eb2e88519baab5f0a4f9d22d20f0bdebb52c0fa4b4e14fe52c1011adb071 WatchSource:0}: Error finding container da11eb2e88519baab5f0a4f9d22d20f0bdebb52c0fa4b4e14fe52c1011adb071: Status 404 returned error can't find the container with id da11eb2e88519baab5f0a4f9d22d20f0bdebb52c0fa4b4e14fe52c1011adb071 Dec 10 14:59:47 crc kubenswrapper[4727]: I1210 14:59:47.632651 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bc159773-04ff-4e53-8ab1-7a292491299e","Type":"ContainerStarted","Data":"da11eb2e88519baab5f0a4f9d22d20f0bdebb52c0fa4b4e14fe52c1011adb071"} Dec 10 14:59:48 crc kubenswrapper[4727]: I1210 14:59:48.647136 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bc159773-04ff-4e53-8ab1-7a292491299e","Type":"ContainerStarted","Data":"f0368cf4b96a56654a5c77e7612274179dd5cca553a21477848922b6b975e341"} Dec 10 14:59:48 crc kubenswrapper[4727]: I1210 14:59:48.648710 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 10 14:59:48 crc kubenswrapper[4727]: I1210 14:59:48.675845 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.675822008 podStartE2EDuration="2.675822008s" podCreationTimestamp="2025-12-10 14:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:59:48.663987629 +0000 UTC m=+1692.858762171" watchObservedRunningTime="2025-12-10 14:59:48.675822008 +0000 UTC m=+1692.870596550" Dec 10 14:59:49 crc kubenswrapper[4727]: I1210 14:59:49.827492 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:49 crc kubenswrapper[4727]: I1210 14:59:49.827822 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="sg-core" containerID="cri-o://5eaacda7a4edd18755cb2532a9120e0fdb6e76b5671180b3d1a0ce3bc2d7d124" gracePeriod=30 Dec 10 14:59:49 crc kubenswrapper[4727]: I1210 14:59:49.827835 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="ceilometer-notification-agent" containerID="cri-o://b8f89e796222aa3865301b7fe9d57831de93d10e06621cbfbad281b496d58c6c" gracePeriod=30 Dec 
10 14:59:49 crc kubenswrapper[4727]: I1210 14:59:49.827937 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="proxy-httpd" containerID="cri-o://976415abce4149cef2f2bf6c4d947647a9154404f4b53ee7f2b40b5bfcef535e" gracePeriod=30 Dec 10 14:59:49 crc kubenswrapper[4727]: I1210 14:59:49.827776 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="ceilometer-central-agent" containerID="cri-o://526ee02f6b6d318bc473490ef5c69590b95e08d6d691aff202d55b00ceea77af" gracePeriod=30 Dec 10 14:59:49 crc kubenswrapper[4727]: I1210 14:59:49.835510 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.204:3000/\": EOF" Dec 10 14:59:50 crc kubenswrapper[4727]: I1210 14:59:50.674199 4727 generic.go:334] "Generic (PLEG): container finished" podID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerID="976415abce4149cef2f2bf6c4d947647a9154404f4b53ee7f2b40b5bfcef535e" exitCode=0 Dec 10 14:59:50 crc kubenswrapper[4727]: I1210 14:59:50.674788 4727 generic.go:334] "Generic (PLEG): container finished" podID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerID="5eaacda7a4edd18755cb2532a9120e0fdb6e76b5671180b3d1a0ce3bc2d7d124" exitCode=2 Dec 10 14:59:50 crc kubenswrapper[4727]: I1210 14:59:50.674802 4727 generic.go:334] "Generic (PLEG): container finished" podID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerID="526ee02f6b6d318bc473490ef5c69590b95e08d6d691aff202d55b00ceea77af" exitCode=0 Dec 10 14:59:50 crc kubenswrapper[4727]: I1210 14:59:50.676518 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4124e168-5d24-4eb5-aada-4add345eb9e2","Type":"ContainerDied","Data":"976415abce4149cef2f2bf6c4d947647a9154404f4b53ee7f2b40b5bfcef535e"} Dec 10 14:59:50 crc kubenswrapper[4727]: I1210 14:59:50.676672 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4124e168-5d24-4eb5-aada-4add345eb9e2","Type":"ContainerDied","Data":"5eaacda7a4edd18755cb2532a9120e0fdb6e76b5671180b3d1a0ce3bc2d7d124"} Dec 10 14:59:50 crc kubenswrapper[4727]: I1210 14:59:50.676686 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4124e168-5d24-4eb5-aada-4add345eb9e2","Type":"ContainerDied","Data":"526ee02f6b6d318bc473490ef5c69590b95e08d6d691aff202d55b00ceea77af"} Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.113537 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.698434 4727 generic.go:334] "Generic (PLEG): container finished" podID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerID="b8f89e796222aa3865301b7fe9d57831de93d10e06621cbfbad281b496d58c6c" exitCode=0 Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.698533 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4124e168-5d24-4eb5-aada-4add345eb9e2","Type":"ContainerDied","Data":"b8f89e796222aa3865301b7fe9d57831de93d10e06621cbfbad281b496d58c6c"} Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.698766 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4124e168-5d24-4eb5-aada-4add345eb9e2","Type":"ContainerDied","Data":"69270a344394b0cd692b726e6c13e1c2d62807ae294cd2acba8d86f1a732c184"} Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.698786 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69270a344394b0cd692b726e6c13e1c2d62807ae294cd2acba8d86f1a732c184" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.736289 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.793464 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4124e168-5d24-4eb5-aada-4add345eb9e2-log-httpd\") pod \"4124e168-5d24-4eb5-aada-4add345eb9e2\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.793577 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-sg-core-conf-yaml\") pod \"4124e168-5d24-4eb5-aada-4add345eb9e2\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.793681 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvrdp\" (UniqueName: \"kubernetes.io/projected/4124e168-5d24-4eb5-aada-4add345eb9e2-kube-api-access-gvrdp\") pod \"4124e168-5d24-4eb5-aada-4add345eb9e2\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.793768 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-combined-ca-bundle\") pod \"4124e168-5d24-4eb5-aada-4add345eb9e2\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.793834 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-config-data\") pod \"4124e168-5d24-4eb5-aada-4add345eb9e2\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.793956 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4124e168-5d24-4eb5-aada-4add345eb9e2-run-httpd\") pod \"4124e168-5d24-4eb5-aada-4add345eb9e2\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.793978 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-scripts\") pod \"4124e168-5d24-4eb5-aada-4add345eb9e2\" (UID: \"4124e168-5d24-4eb5-aada-4add345eb9e2\") " Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.794093 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4124e168-5d24-4eb5-aada-4add345eb9e2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4124e168-5d24-4eb5-aada-4add345eb9e2" (UID: "4124e168-5d24-4eb5-aada-4add345eb9e2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.794548 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4124e168-5d24-4eb5-aada-4add345eb9e2-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.794628 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4124e168-5d24-4eb5-aada-4add345eb9e2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4124e168-5d24-4eb5-aada-4add345eb9e2" (UID: "4124e168-5d24-4eb5-aada-4add345eb9e2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.800114 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-scripts" (OuterVolumeSpecName: "scripts") pod "4124e168-5d24-4eb5-aada-4add345eb9e2" (UID: "4124e168-5d24-4eb5-aada-4add345eb9e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.811169 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4124e168-5d24-4eb5-aada-4add345eb9e2-kube-api-access-gvrdp" (OuterVolumeSpecName: "kube-api-access-gvrdp") pod "4124e168-5d24-4eb5-aada-4add345eb9e2" (UID: "4124e168-5d24-4eb5-aada-4add345eb9e2"). InnerVolumeSpecName "kube-api-access-gvrdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.834162 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4124e168-5d24-4eb5-aada-4add345eb9e2" (UID: "4124e168-5d24-4eb5-aada-4add345eb9e2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.898836 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4124e168-5d24-4eb5-aada-4add345eb9e2-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.898875 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.898887 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.898900 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvrdp\" (UniqueName: \"kubernetes.io/projected/4124e168-5d24-4eb5-aada-4add345eb9e2-kube-api-access-gvrdp\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.944130 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-t2shg"] Dec 10 14:59:52 crc kubenswrapper[4727]: E1210 14:59:52.944857 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="ceilometer-central-agent" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.944954 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="ceilometer-central-agent" Dec 10 14:59:52 crc kubenswrapper[4727]: E1210 14:59:52.945028 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="sg-core" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.945091 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="sg-core" Dec 10 14:59:52 crc kubenswrapper[4727]: E1210 14:59:52.945158 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="proxy-httpd" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.945218 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="proxy-httpd" Dec 10 14:59:52 crc kubenswrapper[4727]: E1210 14:59:52.945295 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="ceilometer-notification-agent" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.945357 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="ceilometer-notification-agent" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.945625 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="proxy-httpd" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.945702 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="sg-core" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.945763 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="ceilometer-notification-agent" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 
14:59:52.945826 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" containerName="ceilometer-central-agent" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.946854 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.948044 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-config-data" (OuterVolumeSpecName: "config-data") pod "4124e168-5d24-4eb5-aada-4add345eb9e2" (UID: "4124e168-5d24-4eb5-aada-4add345eb9e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.950107 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.953338 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.963333 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-t2shg"] Dec 10 14:59:52 crc kubenswrapper[4727]: I1210 14:59:52.966342 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4124e168-5d24-4eb5-aada-4add345eb9e2" (UID: "4124e168-5d24-4eb5-aada-4add345eb9e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.001450 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz5xm\" (UniqueName: \"kubernetes.io/projected/e777d780-3582-474d-997c-6bb3f1b108da-kube-api-access-fz5xm\") pod \"nova-cell0-cell-mapping-t2shg\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.001606 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-config-data\") pod \"nova-cell0-cell-mapping-t2shg\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.001658 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-t2shg\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.002593 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-scripts\") pod \"nova-cell0-cell-mapping-t2shg\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.003189 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.003211 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4124e168-5d24-4eb5-aada-4add345eb9e2-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.105592 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz5xm\" (UniqueName: \"kubernetes.io/projected/e777d780-3582-474d-997c-6bb3f1b108da-kube-api-access-fz5xm\") pod \"nova-cell0-cell-mapping-t2shg\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.106068 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-config-data\") pod \"nova-cell0-cell-mapping-t2shg\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.106122 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-t2shg\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.106238 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-scripts\") pod \"nova-cell0-cell-mapping-t2shg\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.116899 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-scripts\") pod \"nova-cell0-cell-mapping-t2shg\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.120690 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-t2shg\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.133660 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-config-data\") pod \"nova-cell0-cell-mapping-t2shg\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.155631 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz5xm\" (UniqueName: \"kubernetes.io/projected/e777d780-3582-474d-997c-6bb3f1b108da-kube-api-access-fz5xm\") pod \"nova-cell0-cell-mapping-t2shg\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.166280 4727 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.167887 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.173340 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.188160 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.208745 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-448s9\" (UniqueName: \"kubernetes.io/projected/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-kube-api-access-448s9\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3180cdb-857d-4bc1-9b84-a872dfa84cfe\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.208995 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3180cdb-857d-4bc1-9b84-a872dfa84cfe\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.209153 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3180cdb-857d-4bc1-9b84-a872dfa84cfe\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.312377 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.314281 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3180cdb-857d-4bc1-9b84-a872dfa84cfe\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.314383 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3180cdb-857d-4bc1-9b84-a872dfa84cfe\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.314464 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-448s9\" (UniqueName: \"kubernetes.io/projected/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-kube-api-access-448s9\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3180cdb-857d-4bc1-9b84-a872dfa84cfe\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.315523 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.323727 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.324216 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.330784 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3180cdb-857d-4bc1-9b84-a872dfa84cfe\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.331187 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3180cdb-857d-4bc1-9b84-a872dfa84cfe\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.359507 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.365660 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.391806 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.397782 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-448s9\" (UniqueName: \"kubernetes.io/projected/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-kube-api-access-448s9\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3180cdb-857d-4bc1-9b84-a872dfa84cfe\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.404335 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.418138 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " pod="openstack/nova-metadata-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.418217 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-logs\") pod \"nova-metadata-0\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " pod="openstack/nova-metadata-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.418269 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-config-data\") pod \"nova-api-0\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " pod="openstack/nova-api-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.418293 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6gjp\" (UniqueName: \"kubernetes.io/projected/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-kube-api-access-s6gjp\") pod \"nova-metadata-0\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " pod="openstack/nova-metadata-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.418432 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-config-data\") pod \"nova-metadata-0\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " pod="openstack/nova-metadata-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.422202 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.458287 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.459969 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.465233 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.466864 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.495690 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.520213 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qstdv\" (UniqueName: \"kubernetes.io/projected/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-kube-api-access-qstdv\") pod \"nova-api-0\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " pod="openstack/nova-api-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.520313 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-config-data\") pod \"nova-metadata-0\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " pod="openstack/nova-metadata-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.520370 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " pod="openstack/nova-metadata-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.520403 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " pod="openstack/nova-api-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.520458 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-logs\") pod \"nova-metadata-0\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " pod="openstack/nova-metadata-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.520516 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-logs\") pod \"nova-api-0\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " pod="openstack/nova-api-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.520545 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-config-data\") pod \"nova-api-0\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " pod="openstack/nova-api-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.520570 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6gjp\" (UniqueName: \"kubernetes.io/projected/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-kube-api-access-s6gjp\") pod \"nova-metadata-0\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " pod="openstack/nova-metadata-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.522318 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-logs\") pod \"nova-metadata-0\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " pod="openstack/nova-metadata-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.529727 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-combined-ca-bundle\") 
pod \"nova-metadata-0\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " pod="openstack/nova-metadata-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.532543 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-config-data\") pod \"nova-api-0\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " pod="openstack/nova-api-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.536892 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cd565959-lvfng"] Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.556295 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-config-data\") pod \"nova-metadata-0\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " pod="openstack/nova-metadata-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.579262 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6gjp\" (UniqueName: \"kubernetes.io/projected/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-kube-api-access-s6gjp\") pod \"nova-metadata-0\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " pod="openstack/nova-metadata-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.592811 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.630781 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh9cg\" (UniqueName: \"kubernetes.io/projected/a1bdf679-e9e9-453d-9354-38a2873b37a7-kube-api-access-vh9cg\") pod \"nova-scheduler-0\" (UID: \"a1bdf679-e9e9-453d-9354-38a2873b37a7\") " pod="openstack/nova-scheduler-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.630880 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bdf679-e9e9-453d-9354-38a2873b37a7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a1bdf679-e9e9-453d-9354-38a2873b37a7\") " pod="openstack/nova-scheduler-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.630986 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qstdv\" (UniqueName: \"kubernetes.io/projected/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-kube-api-access-qstdv\") pod \"nova-api-0\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " pod="openstack/nova-api-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.631196 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " pod="openstack/nova-api-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.631245 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1bdf679-e9e9-453d-9354-38a2873b37a7-config-data\") pod \"nova-scheduler-0\" (UID: \"a1bdf679-e9e9-453d-9354-38a2873b37a7\") " pod="openstack/nova-scheduler-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.631377 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-logs\") pod \"nova-api-0\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " pod="openstack/nova-api-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.633491 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-lvfng"] Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.646778 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-logs\") pod \"nova-api-0\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " pod="openstack/nova-api-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.661181 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " pod="openstack/nova-api-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.661402 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qstdv\" (UniqueName: \"kubernetes.io/projected/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-kube-api-access-qstdv\") pod \"nova-api-0\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " pod="openstack/nova-api-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.684841 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.715656 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.744013 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.744109 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45b7z\" (UniqueName: \"kubernetes.io/projected/13fd83eb-02fc-4d4a-938c-0c363d300ae6-kube-api-access-45b7z\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.744275 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-dns-svc\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.747557 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-config\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.747675 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.747762 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh9cg\" (UniqueName: \"kubernetes.io/projected/a1bdf679-e9e9-453d-9354-38a2873b37a7-kube-api-access-vh9cg\") pod \"nova-scheduler-0\" (UID: \"a1bdf679-e9e9-453d-9354-38a2873b37a7\") " pod="openstack/nova-scheduler-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.747897 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bdf679-e9e9-453d-9354-38a2873b37a7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a1bdf679-e9e9-453d-9354-38a2873b37a7\") " pod="openstack/nova-scheduler-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.748360 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.748503 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.748663 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1bdf679-e9e9-453d-9354-38a2873b37a7-config-data\") pod \"nova-scheduler-0\" (UID: \"a1bdf679-e9e9-453d-9354-38a2873b37a7\") " pod="openstack/nova-scheduler-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.761430 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1bdf679-e9e9-453d-9354-38a2873b37a7-config-data\") pod \"nova-scheduler-0\" (UID: \"a1bdf679-e9e9-453d-9354-38a2873b37a7\") " pod="openstack/nova-scheduler-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.769029 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bdf679-e9e9-453d-9354-38a2873b37a7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a1bdf679-e9e9-453d-9354-38a2873b37a7\") " pod="openstack/nova-scheduler-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.770541 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh9cg\" (UniqueName: \"kubernetes.io/projected/a1bdf679-e9e9-453d-9354-38a2873b37a7-kube-api-access-vh9cg\") pod \"nova-scheduler-0\" (UID: \"a1bdf679-e9e9-453d-9354-38a2873b37a7\") " pod="openstack/nova-scheduler-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.828552 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.851538 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.853434 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.857250 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45b7z\" (UniqueName: \"kubernetes.io/projected/13fd83eb-02fc-4d4a-938c-0c363d300ae6-kube-api-access-45b7z\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.857410 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-dns-svc\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.857578 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-config\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.858130 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.858214 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.859172 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-dns-svc\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.862650 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc 
kubenswrapper[4727]: I1210 14:59:53.864027 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-config\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.866052 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.894050 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45b7z\" (UniqueName: \"kubernetes.io/projected/13fd83eb-02fc-4d4a-938c-0c363d300ae6-kube-api-access-45b7z\") pod \"dnsmasq-dns-78cd565959-lvfng\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:53 crc kubenswrapper[4727]: I1210 14:59:53.933872 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.061150 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.100034 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.148553 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.152108 4727 util.go:30] "No sandbox for pod can be found. 
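Need to start a new one" pod="openstack/ceilometer-0"

The mount sequence above is the kubelet volume manager's usual three-step cadence per volume: operationExecutor.VerifyControllerAttachedVolume, then operationExecutor.MountVolume started, then MountVolume.SetUp succeeded. A minimal sketch for pulling the completed mounts out of an excerpt like this one (the regex is an assumption fitted to the quoting visible here, not kubelet's own format specification):

    import re
    from collections import defaultdict

    # Matches entries such as:
    #   MountVolume.SetUp succeeded for volume \"config-data\" (...) pod="openstack/nova-api-0"
    # The \\? allows for the escaped quotes journald shows around volume names.
    SETUP_OK = re.compile(
        r'MountVolume\.SetUp succeeded for volume \\?"(?P<volume>[^"\\]+)\\?"'
        r'.*?pod="(?P<pod>[^"]+)"'
    )

    def mounted_volumes(log_text: str) -> dict[str, list[str]]:
        """Group successfully mounted volume names by pod."""
        out = defaultdict(list)
        for m in SETUP_OK.finditer(log_text):
            out[m.group("pod")].append(m.group("volume"))
        return dict(out)

Run over the excerpt above, this would report, for example, openstack/nova-api-0 with config-data, logs, combined-ca-bundle and kube-api-access-qstdv, matching the four SetUp entries logged for that pod.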
Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.171014 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.171174 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.172860 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-config-data\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.177732 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949f6395-0c21-4bab-9cc1-6dc83aad5aac-run-httpd\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.178531 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.178561 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qds5m\" (UniqueName: \"kubernetes.io/projected/949f6395-0c21-4bab-9cc1-6dc83aad5aac-kube-api-access-qds5m\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.178608 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949f6395-0c21-4bab-9cc1-6dc83aad5aac-log-httpd\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.178636 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-scripts\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.180706 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.194196 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.253340 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-t2shg"] Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.283240 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-config-data\")
pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.283305 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949f6395-0c21-4bab-9cc1-6dc83aad5aac-run-httpd\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.283432 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.283453 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qds5m\" (UniqueName: \"kubernetes.io/projected/949f6395-0c21-4bab-9cc1-6dc83aad5aac-kube-api-access-qds5m\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.283473 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949f6395-0c21-4bab-9cc1-6dc83aad5aac-log-httpd\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.283495 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-scripts\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.283550 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.288200 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949f6395-0c21-4bab-9cc1-6dc83aad5aac-log-httpd\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.288200 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949f6395-0c21-4bab-9cc1-6dc83aad5aac-run-httpd\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.291533 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.291696 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-config-data\") pod \"ceilometer-0\" (UID: 
\"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.292404 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.292410 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-scripts\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.311994 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qds5m\" (UniqueName: \"kubernetes.io/projected/949f6395-0c21-4bab-9cc1-6dc83aad5aac-kube-api-access-qds5m\") pod \"ceilometer-0\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " pod="openstack/ceilometer-0" Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.550567 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 14:59:54 crc kubenswrapper[4727]: I1210 14:59:54.586958 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.314259 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4124e168-5d24-4eb5-aada-4add345eb9e2" path="/var/lib/kubelet/pods/4124e168-5d24-4eb5-aada-4add345eb9e2/volumes" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.320828 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-t2shg" event={"ID":"e777d780-3582-474d-997c-6bb3f1b108da","Type":"ContainerStarted","Data":"0f8d8536ff931b93fdc39019f80f39b3f7b30a160de62ff8388b0d2a5c1d1e42"} Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.320888 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5d4rn"] Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.338624 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b3180cdb-857d-4bc1-9b84-a872dfa84cfe","Type":"ContainerStarted","Data":"4196537acd17c073f4b6f2fa8dcbb59a1b6b4dd9db0741d8c157838a8d130a9c"} Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.338688 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5d4rn"] Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.338806 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.363649 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.363975 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.391510 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5d4rn\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.391620 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-config-data\") pod \"nova-cell1-conductor-db-sync-5d4rn\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.391682 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-scripts\") pod \"nova-cell1-conductor-db-sync-5d4rn\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.391737 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnsvt\" (UniqueName: \"kubernetes.io/projected/f00dd75e-d42e-41fa-93f2-728409ffcb47-kube-api-access-pnsvt\") pod \"nova-cell1-conductor-db-sync-5d4rn\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.551605 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.620542 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.635012 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.646698 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-lvfng"] Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.652003 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-scripts\") pod \"nova-cell1-conductor-db-sync-5d4rn\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.652119 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnsvt\" (UniqueName: \"kubernetes.io/projected/f00dd75e-d42e-41fa-93f2-728409ffcb47-kube-api-access-pnsvt\") pod \"nova-cell1-conductor-db-sync-5d4rn\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 14:59:55 crc kubenswrapper[4727]: 
I1210 14:59:55.652515 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5d4rn\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.652645 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-config-data\") pod \"nova-cell1-conductor-db-sync-5d4rn\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.709251 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnsvt\" (UniqueName: \"kubernetes.io/projected/f00dd75e-d42e-41fa-93f2-728409ffcb47-kube-api-access-pnsvt\") pod \"nova-cell1-conductor-db-sync-5d4rn\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.712667 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5d4rn\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.722487 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-scripts\") pod \"nova-cell1-conductor-db-sync-5d4rn\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.756604 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-config-data\") pod \"nova-cell1-conductor-db-sync-5d4rn\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.775254 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 14:59:55 crc kubenswrapper[4727]: I1210 14:59:55.959179 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-lvfng" event={"ID":"13fd83eb-02fc-4d4a-938c-0c363d300ae6","Type":"ContainerStarted","Data":"a39b1f24f069be2e1a016ba8b6076c9585e5bcd3dcd65c0c2dad1b96ed89308b"} Dec 10 14:59:56 crc kubenswrapper[4727]: I1210 14:59:56.019496 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc","Type":"ContainerStarted","Data":"0b10bb5d55d300aeeb220436835978268085e05ec8dff5db2bb285117ba25d67"} Dec 10 14:59:56 crc kubenswrapper[4727]: I1210 14:59:56.072983 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab","Type":"ContainerStarted","Data":"3e6d0a49620002d518312fed00e14d18e5f9bacbb96452e5c167546d08c0a11b"} Dec 10 14:59:56 crc kubenswrapper[4727]: I1210 14:59:56.142937 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:56 crc kubenswrapper[4727]: I1210 14:59:56.195168 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a1bdf679-e9e9-453d-9354-38a2873b37a7","Type":"ContainerStarted","Data":"41cf7830ee2c86ad60f8413140c96efca0c8ac9a34d9bf9be9dc94ad93d7fb58"} Dec 10 14:59:56 crc kubenswrapper[4727]: I1210 14:59:56.780147 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5d4rn"] Dec 10 14:59:57 crc kubenswrapper[4727]: I1210 14:59:57.217551 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949f6395-0c21-4bab-9cc1-6dc83aad5aac","Type":"ContainerStarted","Data":"d29f558227f51f4c19e24e8d390a9b465ad4eb3c0bbbf39944f9be4112b7a820"} Dec 10 14:59:57 crc kubenswrapper[4727]: I1210 14:59:57.237827 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-t2shg" event={"ID":"e777d780-3582-474d-997c-6bb3f1b108da","Type":"ContainerStarted","Data":"5cc57ba2b2e184270eb446c31167b9ef0b6acc658255c2e37103bb617db09f61"} Dec 10 14:59:57 crc kubenswrapper[4727]: I1210 14:59:57.257699 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5d4rn" event={"ID":"f00dd75e-d42e-41fa-93f2-728409ffcb47","Type":"ContainerStarted","Data":"91d2599f586d9d843e7d86878d197f06ae4719c28280f0f1d45ea4c63c3d850a"} Dec 10 14:59:57 crc kubenswrapper[4727]: I1210 14:59:57.290124 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-t2shg" podStartSLOduration=5.290096763 podStartE2EDuration="5.290096763s" podCreationTimestamp="2025-12-10 14:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:59:57.279841004 +0000 UTC m=+1701.474615546" watchObservedRunningTime="2025-12-10 14:59:57.290096763 +0000 UTC m=+1701.484871305" Dec 10 14:59:58 crc kubenswrapper[4727]: I1210 14:59:58.134999 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 14:59:58 crc kubenswrapper[4727]: I1210 14:59:58.303468 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-lvfng" 
event={"ID":"13fd83eb-02fc-4d4a-938c-0c363d300ae6","Type":"ContainerStarted","Data":"543588c741b9719e1876f3726d37cfe51c76bc752e3539b493082b6037d25d85"} Dec 10 14:59:58 crc kubenswrapper[4727]: I1210 14:59:58.327371 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 14:59:59 crc kubenswrapper[4727]: I1210 14:59:59.329425 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5d4rn" event={"ID":"f00dd75e-d42e-41fa-93f2-728409ffcb47","Type":"ContainerStarted","Data":"2c30ce2bb5b57642f62585f23a3a6eca2a90960fd0dca6100f3e33d90499b830"} Dec 10 14:59:59 crc kubenswrapper[4727]: I1210 14:59:59.340083 4727 generic.go:334] "Generic (PLEG): container finished" podID="13fd83eb-02fc-4d4a-938c-0c363d300ae6" containerID="543588c741b9719e1876f3726d37cfe51c76bc752e3539b493082b6037d25d85" exitCode=0 Dec 10 14:59:59 crc kubenswrapper[4727]: I1210 14:59:59.340134 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-lvfng" event={"ID":"13fd83eb-02fc-4d4a-938c-0c363d300ae6","Type":"ContainerDied","Data":"543588c741b9719e1876f3726d37cfe51c76bc752e3539b493082b6037d25d85"} Dec 10 15:00:00 crc kubenswrapper[4727]: I1210 15:00:00.176645 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x"] Dec 10 15:00:00 crc kubenswrapper[4727]: I1210 15:00:00.179617 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" Dec 10 15:00:00 crc kubenswrapper[4727]: I1210 15:00:00.184444 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 15:00:00 crc kubenswrapper[4727]: I1210 15:00:00.185149 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 15:00:00 crc kubenswrapper[4727]: I1210 15:00:00.229483 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x"] Dec 10 15:00:00 crc kubenswrapper[4727]: I1210 15:00:00.360858 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p9gz\" (UniqueName: \"kubernetes.io/projected/63ce4724-56ea-4220-ab50-7f83b039cd49-kube-api-access-4p9gz\") pod \"collect-profiles-29422980-qfq9x\" (UID: \"63ce4724-56ea-4220-ab50-7f83b039cd49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" Dec 10 15:00:00 crc kubenswrapper[4727]: I1210 15:00:00.360968 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63ce4724-56ea-4220-ab50-7f83b039cd49-config-volume\") pod \"collect-profiles-29422980-qfq9x\" (UID: \"63ce4724-56ea-4220-ab50-7f83b039cd49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" Dec 10 15:00:00 crc kubenswrapper[4727]: I1210 15:00:00.361201 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63ce4724-56ea-4220-ab50-7f83b039cd49-secret-volume\") pod \"collect-profiles-29422980-qfq9x\" (UID: \"63ce4724-56ea-4220-ab50-7f83b039cd49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" Dec 10 15:00:00 crc 
kubenswrapper[4727]: I1210 15:00:00.462476 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p9gz\" (UniqueName: \"kubernetes.io/projected/63ce4724-56ea-4220-ab50-7f83b039cd49-kube-api-access-4p9gz\") pod \"collect-profiles-29422980-qfq9x\" (UID: \"63ce4724-56ea-4220-ab50-7f83b039cd49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" Dec 10 15:00:00 crc kubenswrapper[4727]: I1210 15:00:00.462528 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63ce4724-56ea-4220-ab50-7f83b039cd49-config-volume\") pod \"collect-profiles-29422980-qfq9x\" (UID: \"63ce4724-56ea-4220-ab50-7f83b039cd49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" Dec 10 15:00:00 crc kubenswrapper[4727]: I1210 15:00:00.462627 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63ce4724-56ea-4220-ab50-7f83b039cd49-secret-volume\") pod \"collect-profiles-29422980-qfq9x\" (UID: \"63ce4724-56ea-4220-ab50-7f83b039cd49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" Dec 10 15:00:00 crc kubenswrapper[4727]: I1210 15:00:00.464350 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63ce4724-56ea-4220-ab50-7f83b039cd49-config-volume\") pod \"collect-profiles-29422980-qfq9x\" (UID: \"63ce4724-56ea-4220-ab50-7f83b039cd49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" Dec 10 15:00:00 crc kubenswrapper[4727]: I1210 15:00:00.473331 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-5d4rn" podStartSLOduration=6.473304938 podStartE2EDuration="6.473304938s" podCreationTimestamp="2025-12-10 14:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:00:00.433416459 +0000 UTC m=+1704.628191001" watchObservedRunningTime="2025-12-10 15:00:00.473304938 +0000 UTC m=+1704.668079480" Dec 10 15:00:00 crc kubenswrapper[4727]: I1210 15:00:00.482743 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63ce4724-56ea-4220-ab50-7f83b039cd49-secret-volume\") pod \"collect-profiles-29422980-qfq9x\" (UID: \"63ce4724-56ea-4220-ab50-7f83b039cd49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" Dec 10 15:00:00 crc kubenswrapper[4727]: I1210 15:00:00.503773 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p9gz\" (UniqueName: \"kubernetes.io/projected/63ce4724-56ea-4220-ab50-7f83b039cd49-kube-api-access-4p9gz\") pod \"collect-profiles-29422980-qfq9x\" (UID: \"63ce4724-56ea-4220-ab50-7f83b039cd49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" Dec 10 15:00:00 crc kubenswrapper[4727]: I1210 15:00:00.552574 4727 util.go:30] "No sandbox for pod can be found. 
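Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x"

In the startup-latency entries above (nova-cell0-cell-mapping-t2shg and nova-cell1-conductor-db-sync-5d4rn), firstStartedPulling and lastFinishedPulling are printed as 0001-01-01 00:00:00 +0000 UTC, which is Go's zero time.Time; together with podStartSLOduration equalling podStartE2EDuration in those same entries, this reads as "no image pull was observed for this pod". A small sketch of that reading (the sentinel handling is an assumption fitted to these entries, not kubelet's code):

    from datetime import datetime, timezone

    # Go's time.Time zero value, as it prints in the entries above.
    GO_ZERO = datetime(1, 1, 1, tzinfo=timezone.utc)

    def pull_window_seconds(first_started: datetime, last_finished: datetime) -> float:
        """Image-pull window for a pod; zero when kubelet logged the zero-time sentinel."""
        if first_started == GO_ZERO or last_finished == GO_ZERO:
            return 0.0
        return (last_finished - first_started).total_seconds()

With a zero pull window the SLO and E2E durations coincide, which is exactly what the cell-mapping (5.290096763 s) and conductor-db-sync (6.473304938 s) entries show.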
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" Dec 10 15:00:01 crc kubenswrapper[4727]: I1210 15:00:01.420117 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-lvfng" event={"ID":"13fd83eb-02fc-4d4a-938c-0c363d300ae6","Type":"ContainerStarted","Data":"89f32ed4f135efb6c1c8cd2e48718ff1c97bb00aeea6bb188d939cadf69f5d63"} Dec 10 15:00:01 crc kubenswrapper[4727]: I1210 15:00:01.420396 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 15:00:01 crc kubenswrapper[4727]: I1210 15:00:01.457046 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78cd565959-lvfng" podStartSLOduration=8.457002143 podStartE2EDuration="8.457002143s" podCreationTimestamp="2025-12-10 14:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:00:01.454421678 +0000 UTC m=+1705.649196220" watchObservedRunningTime="2025-12-10 15:00:01.457002143 +0000 UTC m=+1705.651776685" Dec 10 15:00:07 crc kubenswrapper[4727]: I1210 15:00:07.724114 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:00:07 crc kubenswrapper[4727]: I1210 15:00:07.725259 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:00:08 crc kubenswrapper[4727]: I1210 15:00:08.937171 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 15:00:09 crc kubenswrapper[4727]: I1210 15:00:09.017123 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-l2cjd"] Dec 10 15:00:09 crc kubenswrapper[4727]: I1210 15:00:09.017427 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" podUID="2d94959d-f645-4e03-b152-bbd85cf1a91c" containerName="dnsmasq-dns" containerID="cri-o://05f92f9e106fdc772624f69a999f044d049a14fd8c1748b0967af6baf585831a" gracePeriod=10 Dec 10 15:00:09 crc kubenswrapper[4727]: I1210 15:00:09.456695 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x"] Dec 10 15:00:09 crc kubenswrapper[4727]: I1210 15:00:09.624214 4727 generic.go:334] "Generic (PLEG): container finished" podID="2d94959d-f645-4e03-b152-bbd85cf1a91c" containerID="05f92f9e106fdc772624f69a999f044d049a14fd8c1748b0967af6baf585831a" exitCode=0 Dec 10 15:00:09 crc kubenswrapper[4727]: I1210 15:00:09.624269 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" event={"ID":"2d94959d-f645-4e03-b152-bbd85cf1a91c","Type":"ContainerDied","Data":"05f92f9e106fdc772624f69a999f044d049a14fd8c1748b0967af6baf585831a"} Dec 10 15:00:10 crc kubenswrapper[4727]: W1210 15:00:10.887649 4727 manager.go:1169] Failed to process watch event {EventType:0 
Dec 10 15:00:10 crc kubenswrapper[4727]: W1210 15:00:10.887649 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63ce4724_56ea_4220_ab50_7f83b039cd49.slice/crio-d831f1ebdc590a2fe9cef85e62c67918c1379efbbeacf73b386cf84008e1c148 WatchSource:0}: Error finding container d831f1ebdc590a2fe9cef85e62c67918c1379efbbeacf73b386cf84008e1c148: Status 404 returned error can't find the container with id d831f1ebdc590a2fe9cef85e62c67918c1379efbbeacf73b386cf84008e1c148 Dec 10 15:00:11 crc kubenswrapper[4727]: I1210 15:00:11.667211 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" event={"ID":"63ce4724-56ea-4220-ab50-7f83b039cd49","Type":"ContainerStarted","Data":"d831f1ebdc590a2fe9cef85e62c67918c1379efbbeacf73b386cf84008e1c148"} Dec 10 15:00:11 crc kubenswrapper[4727]: I1210 15:00:11.671966 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 15:00:11 crc kubenswrapper[4727]: I1210 15:00:11.679267 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" event={"ID":"2d94959d-f645-4e03-b152-bbd85cf1a91c","Type":"ContainerDied","Data":"e26dbc0f973fc3b737baea4926ef722e9b02bc7e9842225adfebb97b8f2fead4"} Dec 10 15:00:11 crc kubenswrapper[4727]: I1210 15:00:11.679436 4727 scope.go:117] "RemoveContainer" containerID="05f92f9e106fdc772624f69a999f044d049a14fd8c1748b0967af6baf585831a" Dec 10 15:00:11 crc kubenswrapper[4727]: I1210 15:00:11.944691 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-dns-swift-storage-0\") pod \"2d94959d-f645-4e03-b152-bbd85cf1a91c\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " Dec 10 15:00:11 crc kubenswrapper[4727]: I1210 15:00:11.945054 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8vf4\" (UniqueName: \"kubernetes.io/projected/2d94959d-f645-4e03-b152-bbd85cf1a91c-kube-api-access-c8vf4\") pod \"2d94959d-f645-4e03-b152-bbd85cf1a91c\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " Dec 10 15:00:11 crc kubenswrapper[4727]: I1210 15:00:11.945135 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-ovsdbserver-nb\") pod \"2d94959d-f645-4e03-b152-bbd85cf1a91c\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " Dec 10 15:00:11 crc kubenswrapper[4727]: I1210 15:00:11.945162 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-dns-svc\") pod \"2d94959d-f645-4e03-b152-bbd85cf1a91c\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " Dec 10 15:00:11 crc kubenswrapper[4727]: I1210 15:00:11.945292 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-ovsdbserver-sb\") pod \"2d94959d-f645-4e03-b152-bbd85cf1a91c\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") " Dec 10 15:00:11 crc kubenswrapper[4727]: I1210 15:00:11.945375 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-config\") pod \"2d94959d-f645-4e03-b152-bbd85cf1a91c\" (UID: \"2d94959d-f645-4e03-b152-bbd85cf1a91c\") "
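The teardown for the old dnsmasq pod mirrors the mount path in reverse: UnmountVolume started above, then UnmountVolume.TearDown succeeded and Volume detached in the entries that follow. (The 404 watch warning just before it reads as a transient race between cAdvisor's cgroup watch and the container runtime; the collect-profiles container it names is reported started in the very next entry.) A sketch that pairs the two ends of a volume's lifecycle in such an excerpt, with regexes fitted to the quoting seen here:

    import re

    UNMOUNT_STARTED = re.compile(
        r'UnmountVolume started for volume \\?"(?P<volume>[^"\\]+)\\?"')
    DETACHED = re.compile(
        r'Volume detached for volume \\?"(?P<volume>[^"\\]+)\\?"')

    def unmount_progress(log_text: str) -> dict[str, bool]:
        """Map each volume that began unmounting to whether a detach was logged."""
        started = {m.group("volume") for m in UNMOUNT_STARTED.finditer(log_text)}
        detached = {m.group("volume") for m in DETACHED.finditer(log_text)}
        return {v: v in detached for v in sorted(started)}

For the 2d94959d pod here, all six volumes (config, dns-svc, dns-swift-storage-0, ovsdbserver-nb, ovsdbserver-sb, kube-api-access-c8vf4) come back detached by 15:00:12.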
Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.002283 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d94959d-f645-4e03-b152-bbd85cf1a91c-kube-api-access-c8vf4" (OuterVolumeSpecName: "kube-api-access-c8vf4") pod "2d94959d-f645-4e03-b152-bbd85cf1a91c" (UID: "2d94959d-f645-4e03-b152-bbd85cf1a91c"). InnerVolumeSpecName "kube-api-access-c8vf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.008335 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949f6395-0c21-4bab-9cc1-6dc83aad5aac","Type":"ContainerStarted","Data":"d73999af619850b6c8a9f166583e763b52f1612f86dbbe752bca052eb1da1c5f"} Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.052354 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8vf4\" (UniqueName: \"kubernetes.io/projected/2d94959d-f645-4e03-b152-bbd85cf1a91c-kube-api-access-c8vf4\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.094766 4727 generic.go:334] "Generic (PLEG): container finished" podID="e777d780-3582-474d-997c-6bb3f1b108da" containerID="5cc57ba2b2e184270eb446c31167b9ef0b6acc658255c2e37103bb617db09f61" exitCode=0 Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.095021 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-t2shg" event={"ID":"e777d780-3582-474d-997c-6bb3f1b108da","Type":"ContainerDied","Data":"5cc57ba2b2e184270eb446c31167b9ef0b6acc658255c2e37103bb617db09f61"} Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.194348 4727 scope.go:117] "RemoveContainer" containerID="ad6ccaf8a6fcc37aa6ee60e7f21f0188e8b29e2ef9a42590c62227008fb66675" Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.533870 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d94959d-f645-4e03-b152-bbd85cf1a91c" (UID: "2d94959d-f645-4e03-b152-bbd85cf1a91c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.559180 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2d94959d-f645-4e03-b152-bbd85cf1a91c" (UID: "2d94959d-f645-4e03-b152-bbd85cf1a91c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.559948 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d94959d-f645-4e03-b152-bbd85cf1a91c" (UID: "2d94959d-f645-4e03-b152-bbd85cf1a91c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.562075 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d94959d-f645-4e03-b152-bbd85cf1a91c" (UID: "2d94959d-f645-4e03-b152-bbd85cf1a91c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.567269 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-config" (OuterVolumeSpecName: "config") pod "2d94959d-f645-4e03-b152-bbd85cf1a91c" (UID: "2d94959d-f645-4e03-b152-bbd85cf1a91c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.574735 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.574761 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.574775 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.574963 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:12 crc kubenswrapper[4727]: I1210 15:00:12.574975 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d94959d-f645-4e03-b152-bbd85cf1a91c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:13 crc kubenswrapper[4727]: I1210 15:00:13.058186 4727 scope.go:117] "RemoveContainer" containerID="0c373e406dcb4de03c5abe950920e7bc30c36af6809476cc922c052470553642" Dec 10 15:00:13 crc kubenswrapper[4727]: I1210 15:00:13.171880 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab","Type":"ContainerStarted","Data":"a170b2c83d9b608310812253a06944356ce725c0af856be9074e6f1edac1394d"} Dec 10 15:00:13 crc kubenswrapper[4727]: I1210 15:00:13.174287 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b3180cdb-857d-4bc1-9b84-a872dfa84cfe","Type":"ContainerStarted","Data":"4bb6fac66e46b198a30cd0223acffe3f1b3fc68dd21e8f4234a120027dcedcf0"} Dec 10 15:00:13 crc kubenswrapper[4727]: I1210 15:00:13.174447 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b3180cdb-857d-4bc1-9b84-a872dfa84cfe" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4bb6fac66e46b198a30cd0223acffe3f1b3fc68dd21e8f4234a120027dcedcf0" gracePeriod=30 Dec 10 15:00:13 crc kubenswrapper[4727]: I1210 15:00:13.179046 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-l2cjd" Dec 10 15:00:13 crc kubenswrapper[4727]: I1210 15:00:13.182737 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc","Type":"ContainerStarted","Data":"2fc76156b8e222213a04f77efb3d1ac4745eeeab89fa55299145536b3dafa3b9"} Dec 10 15:00:13 crc kubenswrapper[4727]: I1210 15:00:13.185712 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a1bdf679-e9e9-453d-9354-38a2873b37a7","Type":"ContainerStarted","Data":"2333e2aa523fe595ec92cd3183b859af5d30c52b84691167903304a47b69e2b3"} Dec 10 15:00:13 crc kubenswrapper[4727]: I1210 15:00:13.191964 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" event={"ID":"63ce4724-56ea-4220-ab50-7f83b039cd49","Type":"ContainerStarted","Data":"e87c35bd13e754165157b99bec5fb308dcd1345813a728d6bc67b5471d1ee80a"} Dec 10 15:00:13 crc kubenswrapper[4727]: I1210 15:00:13.227876 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.470306464 podStartE2EDuration="20.227816167s" podCreationTimestamp="2025-12-10 14:59:53 +0000 UTC" firstStartedPulling="2025-12-10 14:59:54.568889005 +0000 UTC m=+1698.763663557" lastFinishedPulling="2025-12-10 15:00:11.326398718 +0000 UTC m=+1715.521173260" observedRunningTime="2025-12-10 15:00:13.204976409 +0000 UTC m=+1717.399750951" watchObservedRunningTime="2025-12-10 15:00:13.227816167 +0000 UTC m=+1717.422590729" Dec 10 15:00:13 crc kubenswrapper[4727]: I1210 15:00:13.257169 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.572517566 podStartE2EDuration="20.257140299s" podCreationTimestamp="2025-12-10 14:59:53 +0000 UTC" firstStartedPulling="2025-12-10 14:59:55.67874658 +0000 UTC m=+1699.873521122" lastFinishedPulling="2025-12-10 15:00:11.363369313 +0000 UTC m=+1715.558143855" observedRunningTime="2025-12-10 15:00:13.230920116 +0000 UTC m=+1717.425694658" watchObservedRunningTime="2025-12-10 15:00:13.257140299 +0000 UTC m=+1717.451914841" Dec 10 15:00:13 crc kubenswrapper[4727]: I1210 15:00:13.281471 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" podStartSLOduration=13.281448094 podStartE2EDuration="13.281448094s" podCreationTimestamp="2025-12-10 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:00:13.249704981 +0000 UTC m=+1717.444479523" watchObservedRunningTime="2025-12-10 15:00:13.281448094 +0000 UTC m=+1717.476222646" Dec 10 15:00:13 crc kubenswrapper[4727]: I1210 15:00:13.319974 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-l2cjd"] Dec 10 15:00:13 crc kubenswrapper[4727]: I1210 15:00:13.323295 4727 scope.go:117] "RemoveContainer" containerID="0e7ff727711d81f71d11467f87790881f6b385dcaee1f28a8c38c409ae2fa601" Dec 10 15:00:13 crc kubenswrapper[4727]: I1210 15:00:13.331495 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-l2cjd"] Dec 10 15:00:13 crc kubenswrapper[4727]: I1210 15:00:13.497040 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:14 
Dec 10 15:00:14 crc kubenswrapper[4727]: I1210 15:00:14.237316 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 10 15:00:14 crc kubenswrapper[4727]: I1210 15:00:14.290097 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab","Type":"ContainerStarted","Data":"5fc2f2cd153936a9715b23157b11b34d32772cbb27ea3f1721eaa78735067acb"} Dec 10 15:00:14 crc kubenswrapper[4727]: I1210 15:00:14.296543 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 10 15:00:14 crc kubenswrapper[4727]: I1210 15:00:14.408383 4727 scope.go:117] "RemoveContainer" containerID="101502e78d036f6fa46932b317e3f33e89e5d40f792629350a02eeffa7c68f2f" Dec 10 15:00:14 crc kubenswrapper[4727]: I1210 15:00:14.582197 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d94959d-f645-4e03-b152-bbd85cf1a91c" path="/var/lib/kubelet/pods/2d94959d-f645-4e03-b152-bbd85cf1a91c/volumes" Dec 10 15:00:15 crc kubenswrapper[4727]: I1210 15:00:15.334724 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.080364 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.132039 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-combined-ca-bundle\") pod \"e777d780-3582-474d-997c-6bb3f1b108da\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.132302 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz5xm\" (UniqueName: \"kubernetes.io/projected/e777d780-3582-474d-997c-6bb3f1b108da-kube-api-access-fz5xm\") pod \"e777d780-3582-474d-997c-6bb3f1b108da\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.147401 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e777d780-3582-474d-997c-6bb3f1b108da-kube-api-access-fz5xm" (OuterVolumeSpecName: "kube-api-access-fz5xm") pod "e777d780-3582-474d-997c-6bb3f1b108da" (UID: "e777d780-3582-474d-997c-6bb3f1b108da"). InnerVolumeSpecName "kube-api-access-fz5xm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.186461 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e777d780-3582-474d-997c-6bb3f1b108da" (UID: "e777d780-3582-474d-997c-6bb3f1b108da"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.235261 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-scripts\") pod \"e777d780-3582-474d-997c-6bb3f1b108da\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.235339 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-config-data\") pod \"e777d780-3582-474d-997c-6bb3f1b108da\" (UID: \"e777d780-3582-474d-997c-6bb3f1b108da\") " Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.235995 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.236023 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz5xm\" (UniqueName: \"kubernetes.io/projected/e777d780-3582-474d-997c-6bb3f1b108da-kube-api-access-fz5xm\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.239088 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-scripts" (OuterVolumeSpecName: "scripts") pod "e777d780-3582-474d-997c-6bb3f1b108da" (UID: "e777d780-3582-474d-997c-6bb3f1b108da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.290528 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-config-data" (OuterVolumeSpecName: "config-data") pod "e777d780-3582-474d-997c-6bb3f1b108da" (UID: "e777d780-3582-474d-997c-6bb3f1b108da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.317177 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-t2shg" Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.317306 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-t2shg" event={"ID":"e777d780-3582-474d-997c-6bb3f1b108da","Type":"ContainerDied","Data":"0f8d8536ff931b93fdc39019f80f39b3f7b30a160de62ff8388b0d2a5c1d1e42"} Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.317351 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f8d8536ff931b93fdc39019f80f39b3f7b30a160de62ff8388b0d2a5c1d1e42" Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.337318 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.337353 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e777d780-3582-474d-997c-6bb3f1b108da-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:16 crc kubenswrapper[4727]: I1210 15:00:16.358851 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=7.559087327 podStartE2EDuration="23.358827112s" podCreationTimestamp="2025-12-10 14:59:53 +0000 UTC" firstStartedPulling="2025-12-10 14:59:55.563626748 +0000 UTC m=+1699.758401290" lastFinishedPulling="2025-12-10 15:00:11.363366543 +0000 UTC m=+1715.558141075" observedRunningTime="2025-12-10 15:00:16.348017678 +0000 UTC m=+1720.542792230" watchObservedRunningTime="2025-12-10 15:00:16.358827112 +0000 UTC m=+1720.553601654" Dec 10 15:00:17 crc kubenswrapper[4727]: I1210 15:00:17.295740 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:00:17 crc kubenswrapper[4727]: I1210 15:00:17.306701 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:00:17 crc kubenswrapper[4727]: I1210 15:00:17.334802 4727 generic.go:334] "Generic (PLEG): container finished" podID="63ce4724-56ea-4220-ab50-7f83b039cd49" containerID="e87c35bd13e754165157b99bec5fb308dcd1345813a728d6bc67b5471d1ee80a" exitCode=0 Dec 10 15:00:17 crc kubenswrapper[4727]: I1210 15:00:17.335399 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" event={"ID":"63ce4724-56ea-4220-ab50-7f83b039cd49","Type":"ContainerDied","Data":"e87c35bd13e754165157b99bec5fb308dcd1345813a728d6bc67b5471d1ee80a"} Dec 10 15:00:17 crc kubenswrapper[4727]: I1210 15:00:17.335734 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b11f6fb2-e7ac-4f4d-ae26-6f0266483cab" containerName="nova-api-log" containerID="cri-o://a170b2c83d9b608310812253a06944356ce725c0af856be9074e6f1edac1394d" gracePeriod=30 Dec 10 15:00:17 crc kubenswrapper[4727]: I1210 15:00:17.336058 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b11f6fb2-e7ac-4f4d-ae26-6f0266483cab" containerName="nova-api-api" containerID="cri-o://5fc2f2cd153936a9715b23157b11b34d32772cbb27ea3f1721eaa78735067acb" gracePeriod=30 Dec 10 15:00:18 crc kubenswrapper[4727]: I1210 15:00:18.347967 4727 generic.go:334] "Generic (PLEG): container finished" podID="b11f6fb2-e7ac-4f4d-ae26-6f0266483cab" containerID="a170b2c83d9b608310812253a06944356ce725c0af856be9074e6f1edac1394d" 
exitCode=143 Dec 10 15:00:18 crc kubenswrapper[4727]: I1210 15:00:18.348170 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab","Type":"ContainerDied","Data":"a170b2c83d9b608310812253a06944356ce725c0af856be9074e6f1edac1394d"} Dec 10 15:00:18 crc kubenswrapper[4727]: I1210 15:00:18.348534 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a1bdf679-e9e9-453d-9354-38a2873b37a7" containerName="nova-scheduler-scheduler" containerID="cri-o://2333e2aa523fe595ec92cd3183b859af5d30c52b84691167903304a47b69e2b3" gracePeriod=30 Dec 10 15:00:18 crc kubenswrapper[4727]: E1210 15:00:18.932936 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2333e2aa523fe595ec92cd3183b859af5d30c52b84691167903304a47b69e2b3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:00:18 crc kubenswrapper[4727]: E1210 15:00:18.937686 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2333e2aa523fe595ec92cd3183b859af5d30c52b84691167903304a47b69e2b3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:00:18 crc kubenswrapper[4727]: E1210 15:00:18.939371 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2333e2aa523fe595ec92cd3183b859af5d30c52b84691167903304a47b69e2b3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:00:18 crc kubenswrapper[4727]: E1210 15:00:18.939422 4727 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a1bdf679-e9e9-453d-9354-38a2873b37a7" containerName="nova-scheduler-scheduler" Dec 10 15:00:19 crc kubenswrapper[4727]: I1210 15:00:19.170252 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" Dec 10 15:00:19 crc kubenswrapper[4727]: I1210 15:00:19.348719 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63ce4724-56ea-4220-ab50-7f83b039cd49-secret-volume\") pod \"63ce4724-56ea-4220-ab50-7f83b039cd49\" (UID: \"63ce4724-56ea-4220-ab50-7f83b039cd49\") " Dec 10 15:00:19 crc kubenswrapper[4727]: I1210 15:00:19.348951 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63ce4724-56ea-4220-ab50-7f83b039cd49-config-volume\") pod \"63ce4724-56ea-4220-ab50-7f83b039cd49\" (UID: \"63ce4724-56ea-4220-ab50-7f83b039cd49\") " Dec 10 15:00:19 crc kubenswrapper[4727]: I1210 15:00:19.349015 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p9gz\" (UniqueName: \"kubernetes.io/projected/63ce4724-56ea-4220-ab50-7f83b039cd49-kube-api-access-4p9gz\") pod \"63ce4724-56ea-4220-ab50-7f83b039cd49\" (UID: \"63ce4724-56ea-4220-ab50-7f83b039cd49\") " Dec 10 15:00:19 crc kubenswrapper[4727]: I1210 15:00:19.350731 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63ce4724-56ea-4220-ab50-7f83b039cd49-config-volume" (OuterVolumeSpecName: "config-volume") pod "63ce4724-56ea-4220-ab50-7f83b039cd49" (UID: "63ce4724-56ea-4220-ab50-7f83b039cd49"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:19 crc kubenswrapper[4727]: I1210 15:00:19.356441 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ce4724-56ea-4220-ab50-7f83b039cd49-kube-api-access-4p9gz" (OuterVolumeSpecName: "kube-api-access-4p9gz") pod "63ce4724-56ea-4220-ab50-7f83b039cd49" (UID: "63ce4724-56ea-4220-ab50-7f83b039cd49"). InnerVolumeSpecName "kube-api-access-4p9gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:19 crc kubenswrapper[4727]: I1210 15:00:19.361842 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ce4724-56ea-4220-ab50-7f83b039cd49-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "63ce4724-56ea-4220-ab50-7f83b039cd49" (UID: "63ce4724-56ea-4220-ab50-7f83b039cd49"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:19 crc kubenswrapper[4727]: I1210 15:00:19.400607 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" event={"ID":"63ce4724-56ea-4220-ab50-7f83b039cd49","Type":"ContainerDied","Data":"d831f1ebdc590a2fe9cef85e62c67918c1379efbbeacf73b386cf84008e1c148"} Dec 10 15:00:19 crc kubenswrapper[4727]: I1210 15:00:19.400655 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d831f1ebdc590a2fe9cef85e62c67918c1379efbbeacf73b386cf84008e1c148" Dec 10 15:00:19 crc kubenswrapper[4727]: I1210 15:00:19.400707 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x" Dec 10 15:00:19 crc kubenswrapper[4727]: I1210 15:00:19.452180 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63ce4724-56ea-4220-ab50-7f83b039cd49-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:19 crc kubenswrapper[4727]: I1210 15:00:19.452222 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p9gz\" (UniqueName: \"kubernetes.io/projected/63ce4724-56ea-4220-ab50-7f83b039cd49-kube-api-access-4p9gz\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:19 crc kubenswrapper[4727]: I1210 15:00:19.452238 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63ce4724-56ea-4220-ab50-7f83b039cd49-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:20 crc kubenswrapper[4727]: I1210 15:00:20.428104 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc","Type":"ContainerStarted","Data":"537430c4bfd1f23b4eac69e6e4d88bdfdb8b359c1070e771aebfd586ac7081cc"} Dec 10 15:00:20 crc kubenswrapper[4727]: I1210 15:00:20.434168 4727 generic.go:334] "Generic (PLEG): container finished" podID="a1bdf679-e9e9-453d-9354-38a2873b37a7" containerID="2333e2aa523fe595ec92cd3183b859af5d30c52b84691167903304a47b69e2b3" exitCode=0 Dec 10 15:00:20 crc kubenswrapper[4727]: I1210 15:00:20.434271 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a1bdf679-e9e9-453d-9354-38a2873b37a7","Type":"ContainerDied","Data":"2333e2aa523fe595ec92cd3183b859af5d30c52b84691167903304a47b69e2b3"} Dec 10 15:00:20 crc kubenswrapper[4727]: I1210 15:00:20.436831 4727 generic.go:334] "Generic (PLEG): container finished" podID="b11f6fb2-e7ac-4f4d-ae26-6f0266483cab" containerID="5fc2f2cd153936a9715b23157b11b34d32772cbb27ea3f1721eaa78735067acb" exitCode=0 Dec 10 15:00:20 crc kubenswrapper[4727]: I1210 15:00:20.436891 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab","Type":"ContainerDied","Data":"5fc2f2cd153936a9715b23157b11b34d32772cbb27ea3f1721eaa78735067acb"} Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.204566 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.408257 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-combined-ca-bundle\") pod \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.408516 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qstdv\" (UniqueName: \"kubernetes.io/projected/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-kube-api-access-qstdv\") pod \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.408547 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-config-data\") pod \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.408670 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-logs\") pod \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\" (UID: \"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab\") " Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.409815 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-logs" (OuterVolumeSpecName: "logs") pod "b11f6fb2-e7ac-4f4d-ae26-6f0266483cab" (UID: "b11f6fb2-e7ac-4f4d-ae26-6f0266483cab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.414588 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-kube-api-access-qstdv" (OuterVolumeSpecName: "kube-api-access-qstdv") pod "b11f6fb2-e7ac-4f4d-ae26-6f0266483cab" (UID: "b11f6fb2-e7ac-4f4d-ae26-6f0266483cab"). InnerVolumeSpecName "kube-api-access-qstdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.449413 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b11f6fb2-e7ac-4f4d-ae26-6f0266483cab" (UID: "b11f6fb2-e7ac-4f4d-ae26-6f0266483cab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.475592 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949f6395-0c21-4bab-9cc1-6dc83aad5aac","Type":"ContainerStarted","Data":"63881adbe210ada6fb903b19a48b9c7cadb46d53decce91a0c1b6a9f1fab6f1a"} Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.477952 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a1bdf679-e9e9-453d-9354-38a2873b37a7","Type":"ContainerDied","Data":"41cf7830ee2c86ad60f8413140c96efca0c8ac9a34d9bf9be9dc94ad93d7fb58"} Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.477996 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41cf7830ee2c86ad60f8413140c96efca0c8ac9a34d9bf9be9dc94ad93d7fb58" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.481431 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab349abb-073e-4d8c-aa3c-d2ca6f7379dc" containerName="nova-metadata-log" containerID="cri-o://2fc76156b8e222213a04f77efb3d1ac4745eeeab89fa55299145536b3dafa3b9" gracePeriod=30 Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.481614 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.482188 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b11f6fb2-e7ac-4f4d-ae26-6f0266483cab","Type":"ContainerDied","Data":"3e6d0a49620002d518312fed00e14d18e5f9bacbb96452e5c167546d08c0a11b"} Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.482229 4727 scope.go:117] "RemoveContainer" containerID="5fc2f2cd153936a9715b23157b11b34d32772cbb27ea3f1721eaa78735067acb" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.482748 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab349abb-073e-4d8c-aa3c-d2ca6f7379dc" containerName="nova-metadata-metadata" containerID="cri-o://537430c4bfd1f23b4eac69e6e4d88bdfdb8b359c1070e771aebfd586ac7081cc" gracePeriod=30 Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.510742 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.511032 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.511139 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qstdv\" (UniqueName: \"kubernetes.io/projected/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-kube-api-access-qstdv\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.511614 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=12.790782764 podStartE2EDuration="28.511592022s" podCreationTimestamp="2025-12-10 14:59:53 +0000 UTC" firstStartedPulling="2025-12-10 14:59:55.643574461 +0000 UTC m=+1699.838349003" lastFinishedPulling="2025-12-10 15:00:11.364383719 +0000 UTC m=+1715.559158261" observedRunningTime="2025-12-10 15:00:21.506539714 +0000 UTC 
m=+1725.701314256" watchObservedRunningTime="2025-12-10 15:00:21.511592022 +0000 UTC m=+1725.706366564" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.580618 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.590449 4727 scope.go:117] "RemoveContainer" containerID="a170b2c83d9b608310812253a06944356ce725c0af856be9074e6f1edac1394d" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.625784 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-config-data" (OuterVolumeSpecName: "config-data") pod "b11f6fb2-e7ac-4f4d-ae26-6f0266483cab" (UID: "b11f6fb2-e7ac-4f4d-ae26-6f0266483cab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.902808 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh9cg\" (UniqueName: \"kubernetes.io/projected/a1bdf679-e9e9-453d-9354-38a2873b37a7-kube-api-access-vh9cg\") pod \"a1bdf679-e9e9-453d-9354-38a2873b37a7\" (UID: \"a1bdf679-e9e9-453d-9354-38a2873b37a7\") " Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.903130 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1bdf679-e9e9-453d-9354-38a2873b37a7-config-data\") pod \"a1bdf679-e9e9-453d-9354-38a2873b37a7\" (UID: \"a1bdf679-e9e9-453d-9354-38a2873b37a7\") " Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.903328 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bdf679-e9e9-453d-9354-38a2873b37a7-combined-ca-bundle\") pod \"a1bdf679-e9e9-453d-9354-38a2873b37a7\" (UID: \"a1bdf679-e9e9-453d-9354-38a2873b37a7\") " Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.904657 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.926205 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1bdf679-e9e9-453d-9354-38a2873b37a7-kube-api-access-vh9cg" (OuterVolumeSpecName: "kube-api-access-vh9cg") pod "a1bdf679-e9e9-453d-9354-38a2873b37a7" (UID: "a1bdf679-e9e9-453d-9354-38a2873b37a7"). InnerVolumeSpecName "kube-api-access-vh9cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.946039 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1bdf679-e9e9-453d-9354-38a2873b37a7-config-data" (OuterVolumeSpecName: "config-data") pod "a1bdf679-e9e9-453d-9354-38a2873b37a7" (UID: "a1bdf679-e9e9-453d-9354-38a2873b37a7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.964060 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.998188 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1bdf679-e9e9-453d-9354-38a2873b37a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1bdf679-e9e9-453d-9354-38a2873b37a7" (UID: "a1bdf679-e9e9-453d-9354-38a2873b37a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:21 crc kubenswrapper[4727]: I1210 15:00:21.998277 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.015133 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh9cg\" (UniqueName: \"kubernetes.io/projected/a1bdf679-e9e9-453d-9354-38a2873b37a7-kube-api-access-vh9cg\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.015189 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1bdf679-e9e9-453d-9354-38a2873b37a7-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.015201 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bdf679-e9e9-453d-9354-38a2873b37a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.032626 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 10 15:00:22 crc kubenswrapper[4727]: E1210 15:00:22.033704 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e777d780-3582-474d-997c-6bb3f1b108da" containerName="nova-manage" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.033739 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e777d780-3582-474d-997c-6bb3f1b108da" containerName="nova-manage" Dec 10 15:00:22 crc kubenswrapper[4727]: E1210 15:00:22.033776 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1bdf679-e9e9-453d-9354-38a2873b37a7" containerName="nova-scheduler-scheduler" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.033784 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1bdf679-e9e9-453d-9354-38a2873b37a7" containerName="nova-scheduler-scheduler" Dec 10 15:00:22 crc kubenswrapper[4727]: E1210 15:00:22.033814 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d94959d-f645-4e03-b152-bbd85cf1a91c" containerName="dnsmasq-dns" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.033821 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d94959d-f645-4e03-b152-bbd85cf1a91c" containerName="dnsmasq-dns" Dec 10 15:00:22 crc kubenswrapper[4727]: E1210 15:00:22.033873 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d94959d-f645-4e03-b152-bbd85cf1a91c" containerName="init" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.033881 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d94959d-f645-4e03-b152-bbd85cf1a91c" containerName="init" Dec 10 15:00:22 crc kubenswrapper[4727]: E1210 15:00:22.033897 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11f6fb2-e7ac-4f4d-ae26-6f0266483cab" containerName="nova-api-log" Dec 10 15:00:22 crc 
kubenswrapper[4727]: I1210 15:00:22.033926 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11f6fb2-e7ac-4f4d-ae26-6f0266483cab" containerName="nova-api-log" Dec 10 15:00:22 crc kubenswrapper[4727]: E1210 15:00:22.033962 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ce4724-56ea-4220-ab50-7f83b039cd49" containerName="collect-profiles" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.033971 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ce4724-56ea-4220-ab50-7f83b039cd49" containerName="collect-profiles" Dec 10 15:00:22 crc kubenswrapper[4727]: E1210 15:00:22.034036 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11f6fb2-e7ac-4f4d-ae26-6f0266483cab" containerName="nova-api-api" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.034044 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11f6fb2-e7ac-4f4d-ae26-6f0266483cab" containerName="nova-api-api" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.034793 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b11f6fb2-e7ac-4f4d-ae26-6f0266483cab" containerName="nova-api-log" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.034891 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e777d780-3582-474d-997c-6bb3f1b108da" containerName="nova-manage" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.034937 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="63ce4724-56ea-4220-ab50-7f83b039cd49" containerName="collect-profiles" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.034956 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d94959d-f645-4e03-b152-bbd85cf1a91c" containerName="dnsmasq-dns" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.034978 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b11f6fb2-e7ac-4f4d-ae26-6f0266483cab" containerName="nova-api-api" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.035005 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1bdf679-e9e9-453d-9354-38a2873b37a7" containerName="nova-scheduler-scheduler" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.039824 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.046155 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.095128 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.119818 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29jbh\" (UniqueName: \"kubernetes.io/projected/21895a9a-4f96-4f38-b042-8fe46be851d3-kube-api-access-29jbh\") pod \"nova-api-0\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " pod="openstack/nova-api-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.120155 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21895a9a-4f96-4f38-b042-8fe46be851d3-config-data\") pod \"nova-api-0\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " pod="openstack/nova-api-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.120251 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21895a9a-4f96-4f38-b042-8fe46be851d3-logs\") pod \"nova-api-0\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " pod="openstack/nova-api-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.120367 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21895a9a-4f96-4f38-b042-8fe46be851d3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " pod="openstack/nova-api-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.224221 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21895a9a-4f96-4f38-b042-8fe46be851d3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " pod="openstack/nova-api-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.224478 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29jbh\" (UniqueName: \"kubernetes.io/projected/21895a9a-4f96-4f38-b042-8fe46be851d3-kube-api-access-29jbh\") pod \"nova-api-0\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " pod="openstack/nova-api-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.224542 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21895a9a-4f96-4f38-b042-8fe46be851d3-config-data\") pod \"nova-api-0\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " pod="openstack/nova-api-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.224574 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21895a9a-4f96-4f38-b042-8fe46be851d3-logs\") pod \"nova-api-0\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " pod="openstack/nova-api-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.225757 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21895a9a-4f96-4f38-b042-8fe46be851d3-logs\") pod \"nova-api-0\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " 
pod="openstack/nova-api-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.243376 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21895a9a-4f96-4f38-b042-8fe46be851d3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " pod="openstack/nova-api-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.243781 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21895a9a-4f96-4f38-b042-8fe46be851d3-config-data\") pod \"nova-api-0\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " pod="openstack/nova-api-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.271601 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29jbh\" (UniqueName: \"kubernetes.io/projected/21895a9a-4f96-4f38-b042-8fe46be851d3-kube-api-access-29jbh\") pod \"nova-api-0\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " pod="openstack/nova-api-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.367356 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.493008 4727 generic.go:334] "Generic (PLEG): container finished" podID="ab349abb-073e-4d8c-aa3c-d2ca6f7379dc" containerID="537430c4bfd1f23b4eac69e6e4d88bdfdb8b359c1070e771aebfd586ac7081cc" exitCode=0 Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.494093 4727 generic.go:334] "Generic (PLEG): container finished" podID="ab349abb-073e-4d8c-aa3c-d2ca6f7379dc" containerID="2fc76156b8e222213a04f77efb3d1ac4745eeeab89fa55299145536b3dafa3b9" exitCode=143 Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.493999 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc","Type":"ContainerDied","Data":"537430c4bfd1f23b4eac69e6e4d88bdfdb8b359c1070e771aebfd586ac7081cc"} Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.494176 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc","Type":"ContainerDied","Data":"2fc76156b8e222213a04f77efb3d1ac4745eeeab89fa55299145536b3dafa3b9"} Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.496369 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.607128 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11f6fb2-e7ac-4f4d-ae26-6f0266483cab" path="/var/lib/kubelet/pods/b11f6fb2-e7ac-4f4d-ae26-6f0266483cab/volumes" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.607970 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.612768 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.664984 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.667141 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.671344 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.681360 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.750513 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7cln\" (UniqueName: \"kubernetes.io/projected/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-kube-api-access-r7cln\") pod \"nova-scheduler-0\" (UID: \"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d\") " pod="openstack/nova-scheduler-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.750631 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-config-data\") pod \"nova-scheduler-0\" (UID: \"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d\") " pod="openstack/nova-scheduler-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.750707 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d\") " pod="openstack/nova-scheduler-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.854263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d\") " pod="openstack/nova-scheduler-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.854572 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7cln\" (UniqueName: \"kubernetes.io/projected/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-kube-api-access-r7cln\") pod \"nova-scheduler-0\" (UID: \"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d\") " pod="openstack/nova-scheduler-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.856698 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-config-data\") pod \"nova-scheduler-0\" (UID: \"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d\") " pod="openstack/nova-scheduler-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.861029 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d\") " pod="openstack/nova-scheduler-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.861272 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-config-data\") pod \"nova-scheduler-0\" (UID: \"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d\") " pod="openstack/nova-scheduler-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.872734 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7cln\" (UniqueName: 
\"kubernetes.io/projected/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-kube-api-access-r7cln\") pod \"nova-scheduler-0\" (UID: \"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d\") " pod="openstack/nova-scheduler-0" Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.950639 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:00:22 crc kubenswrapper[4727]: I1210 15:00:22.997832 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.389327 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.474119 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-config-data\") pod \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.474237 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-logs\") pod \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.474263 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-combined-ca-bundle\") pod \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.474310 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6gjp\" (UniqueName: \"kubernetes.io/projected/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-kube-api-access-s6gjp\") pod \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\" (UID: \"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc\") " Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.475302 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-logs" (OuterVolumeSpecName: "logs") pod "ab349abb-073e-4d8c-aa3c-d2ca6f7379dc" (UID: "ab349abb-073e-4d8c-aa3c-d2ca6f7379dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.489192 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-kube-api-access-s6gjp" (OuterVolumeSpecName: "kube-api-access-s6gjp") pod "ab349abb-073e-4d8c-aa3c-d2ca6f7379dc" (UID: "ab349abb-073e-4d8c-aa3c-d2ca6f7379dc"). InnerVolumeSpecName "kube-api-access-s6gjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.720774 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.720811 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6gjp\" (UniqueName: \"kubernetes.io/projected/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-kube-api-access-s6gjp\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.731247 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab349abb-073e-4d8c-aa3c-d2ca6f7379dc","Type":"ContainerDied","Data":"0b10bb5d55d300aeeb220436835978268085e05ec8dff5db2bb285117ba25d67"} Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.731430 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.731617 4727 scope.go:117] "RemoveContainer" containerID="537430c4bfd1f23b4eac69e6e4d88bdfdb8b359c1070e771aebfd586ac7081cc" Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.733260 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21895a9a-4f96-4f38-b042-8fe46be851d3","Type":"ContainerStarted","Data":"0d68264c23e619321e7dbd78ec36af4a847e780c1c49ed99536d6bf07dbb7f01"} Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.733303 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21895a9a-4f96-4f38-b042-8fe46be851d3","Type":"ContainerStarted","Data":"66e053d483b0a6a0b737389ec71ca5c58f3e7aaca4e89a47f31099701ec8b60e"} Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.769674 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab349abb-073e-4d8c-aa3c-d2ca6f7379dc" (UID: "ab349abb-073e-4d8c-aa3c-d2ca6f7379dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.785399 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-config-data" (OuterVolumeSpecName: "config-data") pod "ab349abb-073e-4d8c-aa3c-d2ca6f7379dc" (UID: "ab349abb-073e-4d8c-aa3c-d2ca6f7379dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.812206 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.822892 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:23 crc kubenswrapper[4727]: I1210 15:00:23.824183 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.310486 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.358194 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.372356 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:00:24 crc kubenswrapper[4727]: E1210 15:00:24.373504 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab349abb-073e-4d8c-aa3c-d2ca6f7379dc" containerName="nova-metadata-metadata" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.373533 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab349abb-073e-4d8c-aa3c-d2ca6f7379dc" containerName="nova-metadata-metadata" Dec 10 15:00:24 crc kubenswrapper[4727]: E1210 15:00:24.373548 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab349abb-073e-4d8c-aa3c-d2ca6f7379dc" containerName="nova-metadata-log" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.373554 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab349abb-073e-4d8c-aa3c-d2ca6f7379dc" containerName="nova-metadata-log" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.373837 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab349abb-073e-4d8c-aa3c-d2ca6f7379dc" containerName="nova-metadata-log" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.373872 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab349abb-073e-4d8c-aa3c-d2ca6f7379dc" containerName="nova-metadata-metadata" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.375166 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.384135 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.386860 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.387195 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.421790 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmcpf\" (UniqueName: \"kubernetes.io/projected/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-kube-api-access-rmcpf\") pod \"nova-metadata-0\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") " pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.421957 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-config-data\") pod \"nova-metadata-0\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") " pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.422212 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") " pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.422298 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") " pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.422388 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-logs\") pod \"nova-metadata-0\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") " pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.524514 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-config-data\") pod \"nova-metadata-0\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") " pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.524624 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") " pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.524672 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") " 
pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.524700 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-logs\") pod \"nova-metadata-0\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") " pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.524749 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmcpf\" (UniqueName: \"kubernetes.io/projected/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-kube-api-access-rmcpf\") pod \"nova-metadata-0\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") " pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.527857 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-logs\") pod \"nova-metadata-0\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") " pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.536505 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") " pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.541796 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") " pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.542968 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-config-data\") pod \"nova-metadata-0\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") " pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.544527 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmcpf\" (UniqueName: \"kubernetes.io/projected/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-kube-api-access-rmcpf\") pod \"nova-metadata-0\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") " pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.576137 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1bdf679-e9e9-453d-9354-38a2873b37a7" path="/var/lib/kubelet/pods/a1bdf679-e9e9-453d-9354-38a2873b37a7/volumes" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.577067 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab349abb-073e-4d8c-aa3c-d2ca6f7379dc" path="/var/lib/kubelet/pods/ab349abb-073e-4d8c-aa3c-d2ca6f7379dc/volumes" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.715111 4727 scope.go:117] "RemoveContainer" containerID="2fc76156b8e222213a04f77efb3d1ac4745eeeab89fa55299145536b3dafa3b9" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.715943 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:00:24 crc kubenswrapper[4727]: I1210 15:00:24.781844 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d","Type":"ContainerStarted","Data":"3402f6d60114e046f5fab7af27ce797c16fc6800f9c537733e8af288fe2ef829"} Dec 10 15:00:26 crc kubenswrapper[4727]: I1210 15:00:25.950927 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:00:26 crc kubenswrapper[4727]: W1210 15:00:26.122240 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f9a6e9b_5b12_4812_9d39_3ac67dfe6bb7.slice/crio-4e1525611ca8b7e52e3e966f3e68d8f00b64a2dd389241853413b64ac87bc5c6 WatchSource:0}: Error finding container 4e1525611ca8b7e52e3e966f3e68d8f00b64a2dd389241853413b64ac87bc5c6: Status 404 returned error can't find the container with id 4e1525611ca8b7e52e3e966f3e68d8f00b64a2dd389241853413b64ac87bc5c6 Dec 10 15:00:26 crc kubenswrapper[4727]: I1210 15:00:26.904363 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7","Type":"ContainerStarted","Data":"4e1525611ca8b7e52e3e966f3e68d8f00b64a2dd389241853413b64ac87bc5c6"} Dec 10 15:00:26 crc kubenswrapper[4727]: I1210 15:00:26.907015 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d","Type":"ContainerStarted","Data":"f424e0aa0d6beee5332afa73605ab85efe020142d2017aa8876f52fc32ed1d3b"} Dec 10 15:00:27 crc kubenswrapper[4727]: I1210 15:00:27.931106 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949f6395-0c21-4bab-9cc1-6dc83aad5aac","Type":"ContainerStarted","Data":"d3a749fc15d281c6fa8c86fa7be3076db686710d8208fde95c54e7d31c99b7cb"} Dec 10 15:00:27 crc kubenswrapper[4727]: I1210 15:00:27.933640 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21895a9a-4f96-4f38-b042-8fe46be851d3","Type":"ContainerStarted","Data":"d140cbbca78f26eccc09e986ffa914d8d618cb139bb3d4ba75e9f5078f134ac2"} Dec 10 15:00:27 crc kubenswrapper[4727]: I1210 15:00:27.937419 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7","Type":"ContainerStarted","Data":"cd87fe242f8051ea4656d6681f0e0ca849089acbbe2003c5ca2494ffa8c68679"} Dec 10 15:00:27 crc kubenswrapper[4727]: I1210 15:00:27.962055 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=6.962028728 podStartE2EDuration="6.962028728s" podCreationTimestamp="2025-12-10 15:00:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:00:27.959298169 +0000 UTC m=+1732.154072711" watchObservedRunningTime="2025-12-10 15:00:27.962028728 +0000 UTC m=+1732.156803270" Dec 10 15:00:27 crc kubenswrapper[4727]: I1210 15:00:27.988932 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=5.988912818 podStartE2EDuration="5.988912818s" podCreationTimestamp="2025-12-10 15:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:00:27.980229628 
+0000 UTC m=+1732.175004170" watchObservedRunningTime="2025-12-10 15:00:27.988912818 +0000 UTC m=+1732.183687360" Dec 10 15:00:27 crc kubenswrapper[4727]: I1210 15:00:27.998420 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 10 15:00:28 crc kubenswrapper[4727]: I1210 15:00:28.960769 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7","Type":"ContainerStarted","Data":"c0f245fd005e58c95d1f653f4388c2c65025cbeffb46e8abd8107e40428bfc2f"} Dec 10 15:00:32 crc kubenswrapper[4727]: I1210 15:00:32.025328 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=8.025305366 podStartE2EDuration="8.025305366s" podCreationTimestamp="2025-12-10 15:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:00:32.016244047 +0000 UTC m=+1736.211018599" watchObservedRunningTime="2025-12-10 15:00:32.025305366 +0000 UTC m=+1736.220079908" Dec 10 15:00:32 crc kubenswrapper[4727]: I1210 15:00:32.438156 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:00:32 crc kubenswrapper[4727]: I1210 15:00:32.438211 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:00:32 crc kubenswrapper[4727]: I1210 15:00:32.999132 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 10 15:00:33 crc kubenswrapper[4727]: I1210 15:00:33.058654 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 10 15:00:33 crc kubenswrapper[4727]: I1210 15:00:33.519199 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="21895a9a-4f96-4f38-b042-8fe46be851d3" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:00:33 crc kubenswrapper[4727]: I1210 15:00:33.519201 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="21895a9a-4f96-4f38-b042-8fe46be851d3" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:00:34 crc kubenswrapper[4727]: I1210 15:00:34.061792 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 10 15:00:34 crc kubenswrapper[4727]: I1210 15:00:34.716397 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 15:00:34 crc kubenswrapper[4727]: I1210 15:00:34.716960 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 15:00:34 crc kubenswrapper[4727]: I1210 15:00:34.717047 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 10 15:00:34 crc kubenswrapper[4727]: I1210 15:00:34.717162 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 10 15:00:35 crc kubenswrapper[4727]: I1210 15:00:35.739152 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:00:35 crc kubenswrapper[4727]: I1210 15:00:35.900220 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:00:37 crc kubenswrapper[4727]: I1210 15:00:37.724587 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:00:37 crc kubenswrapper[4727]: I1210 15:00:37.725133 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:00:37 crc kubenswrapper[4727]: I1210 15:00:37.725181 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 15:00:37 crc kubenswrapper[4727]: I1210 15:00:37.726001 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:00:37 crc kubenswrapper[4727]: I1210 15:00:37.726055 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" gracePeriod=600 Dec 10 15:00:38 crc kubenswrapper[4727]: I1210 15:00:38.073800 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949f6395-0c21-4bab-9cc1-6dc83aad5aac","Type":"ContainerStarted","Data":"b0971705c837ba1755857e6caa5b76f82e0265ca87fab1c6c9e138470f145e21"} Dec 10 15:00:38 crc kubenswrapper[4727]: I1210 15:00:38.074096 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:00:38 crc kubenswrapper[4727]: I1210 15:00:38.107304 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.488191841 podStartE2EDuration="44.107282061s" podCreationTimestamp="2025-12-10 14:59:54 +0000 UTC" firstStartedPulling="2025-12-10 14:59:56.283097389 +0000 UTC m=+1700.477871931" lastFinishedPulling="2025-12-10 15:00:35.902187609 +0000 UTC m=+1740.096962151" observedRunningTime="2025-12-10 15:00:38.104590363 +0000 UTC m=+1742.299364905" watchObservedRunningTime="2025-12-10 15:00:38.107282061 +0000 UTC m=+1742.302056613" Dec 10 15:00:39 crc kubenswrapper[4727]: I1210 15:00:39.092338 4727 generic.go:334] "Generic (PLEG): 
container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" exitCode=0 Dec 10 15:00:39 crc kubenswrapper[4727]: I1210 15:00:39.092406 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212"} Dec 10 15:00:39 crc kubenswrapper[4727]: I1210 15:00:39.092466 4727 scope.go:117] "RemoveContainer" containerID="9b2d38dbef40e687b846e527a023d87ca5607929b3a7329d3334ac26ab387fb1" Dec 10 15:00:39 crc kubenswrapper[4727]: E1210 15:00:39.824266 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:00:40 crc kubenswrapper[4727]: I1210 15:00:40.101686 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:00:40 crc kubenswrapper[4727]: E1210 15:00:40.102018 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:00:42 crc kubenswrapper[4727]: I1210 15:00:42.372009 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 15:00:42 crc kubenswrapper[4727]: I1210 15:00:42.372498 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 15:00:42 crc kubenswrapper[4727]: I1210 15:00:42.372676 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 15:00:42 crc kubenswrapper[4727]: I1210 15:00:42.375266 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.130397 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.133468 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.419077 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-mmbg5"] Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.433140 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.445790 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-mmbg5"] Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.857375 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.857785 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-config\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.857824 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.858355 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.858782 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.858866 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl5lt\" (UniqueName: \"kubernetes.io/projected/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-kube-api-access-kl5lt\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.961261 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.961428 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-config\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.961461 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.961552 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.961683 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.961713 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl5lt\" (UniqueName: \"kubernetes.io/projected/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-kube-api-access-kl5lt\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.963157 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.963355 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-config\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.963957 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.964436 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:43 crc kubenswrapper[4727]: I1210 15:00:43.965285 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.009013 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl5lt\" (UniqueName: 
\"kubernetes.io/projected/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-kube-api-access-kl5lt\") pod \"dnsmasq-dns-5fd9b586ff-mmbg5\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.096851 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.157005 4727 generic.go:334] "Generic (PLEG): container finished" podID="b3180cdb-857d-4bc1-9b84-a872dfa84cfe" containerID="4bb6fac66e46b198a30cd0223acffe3f1b3fc68dd21e8f4234a120027dcedcf0" exitCode=137 Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.157085 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b3180cdb-857d-4bc1-9b84-a872dfa84cfe","Type":"ContainerDied","Data":"4bb6fac66e46b198a30cd0223acffe3f1b3fc68dd21e8f4234a120027dcedcf0"} Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.549948 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.689064 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-448s9\" (UniqueName: \"kubernetes.io/projected/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-kube-api-access-448s9\") pod \"b3180cdb-857d-4bc1-9b84-a872dfa84cfe\" (UID: \"b3180cdb-857d-4bc1-9b84-a872dfa84cfe\") " Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.689256 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-combined-ca-bundle\") pod \"b3180cdb-857d-4bc1-9b84-a872dfa84cfe\" (UID: \"b3180cdb-857d-4bc1-9b84-a872dfa84cfe\") " Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.689417 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-config-data\") pod \"b3180cdb-857d-4bc1-9b84-a872dfa84cfe\" (UID: \"b3180cdb-857d-4bc1-9b84-a872dfa84cfe\") " Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.704616 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-kube-api-access-448s9" (OuterVolumeSpecName: "kube-api-access-448s9") pod "b3180cdb-857d-4bc1-9b84-a872dfa84cfe" (UID: "b3180cdb-857d-4bc1-9b84-a872dfa84cfe"). InnerVolumeSpecName "kube-api-access-448s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.743226 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.753145 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3180cdb-857d-4bc1-9b84-a872dfa84cfe" (UID: "b3180cdb-857d-4bc1-9b84-a872dfa84cfe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.753870 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.795752 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-448s9\" (UniqueName: \"kubernetes.io/projected/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-kube-api-access-448s9\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.795789 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.802717 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-config-data" (OuterVolumeSpecName: "config-data") pod "b3180cdb-857d-4bc1-9b84-a872dfa84cfe" (UID: "b3180cdb-857d-4bc1-9b84-a872dfa84cfe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.812194 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 10 15:00:44 crc kubenswrapper[4727]: I1210 15:00:44.897706 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3180cdb-857d-4bc1-9b84-a872dfa84cfe-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.004825 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-mmbg5"] Dec 10 15:00:45 crc kubenswrapper[4727]: W1210 15:00:45.005063 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46dda6a1_0c7e_4ea5_9ec5_aab327344ace.slice/crio-8ab55a4ef19e5fc6f844421de274ca36ca859a192200c475cbf3c9f6e26ad92f WatchSource:0}: Error finding container 8ab55a4ef19e5fc6f844421de274ca36ca859a192200c475cbf3c9f6e26ad92f: Status 404 returned error can't find the container with id 8ab55a4ef19e5fc6f844421de274ca36ca859a192200c475cbf3c9f6e26ad92f Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.178983 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" event={"ID":"46dda6a1-0c7e-4ea5-9ec5-aab327344ace","Type":"ContainerStarted","Data":"8ab55a4ef19e5fc6f844421de274ca36ca859a192200c475cbf3c9f6e26ad92f"} Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.181967 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.182010 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b3180cdb-857d-4bc1-9b84-a872dfa84cfe","Type":"ContainerDied","Data":"4196537acd17c073f4b6f2fa8dcbb59a1b6b4dd9db0741d8c157838a8d130a9c"} Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.182087 4727 scope.go:117] "RemoveContainer" containerID="4bb6fac66e46b198a30cd0223acffe3f1b3fc68dd21e8f4234a120027dcedcf0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.234504 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.246895 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.247939 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.268021 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:00:45 crc kubenswrapper[4727]: E1210 15:00:45.268693 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3180cdb-857d-4bc1-9b84-a872dfa84cfe" containerName="nova-cell1-novncproxy-novncproxy" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.268717 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3180cdb-857d-4bc1-9b84-a872dfa84cfe" containerName="nova-cell1-novncproxy-novncproxy" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.269014 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3180cdb-857d-4bc1-9b84-a872dfa84cfe" containerName="nova-cell1-novncproxy-novncproxy" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.269954 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.272079 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.275897 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.277273 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.293338 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.695618 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd28088f-9229-4753-a618-7b948ab06773-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd28088f-9229-4753-a618-7b948ab06773\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.695679 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zbs6\" (UniqueName: \"kubernetes.io/projected/fd28088f-9229-4753-a618-7b948ab06773-kube-api-access-8zbs6\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd28088f-9229-4753-a618-7b948ab06773\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.695737 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd28088f-9229-4753-a618-7b948ab06773-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd28088f-9229-4753-a618-7b948ab06773\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.695794 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd28088f-9229-4753-a618-7b948ab06773-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd28088f-9229-4753-a618-7b948ab06773\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.695882 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd28088f-9229-4753-a618-7b948ab06773-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd28088f-9229-4753-a618-7b948ab06773\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.799699 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zbs6\" (UniqueName: \"kubernetes.io/projected/fd28088f-9229-4753-a618-7b948ab06773-kube-api-access-8zbs6\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd28088f-9229-4753-a618-7b948ab06773\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.799833 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd28088f-9229-4753-a618-7b948ab06773-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd28088f-9229-4753-a618-7b948ab06773\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.800655 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd28088f-9229-4753-a618-7b948ab06773-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd28088f-9229-4753-a618-7b948ab06773\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.800872 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd28088f-9229-4753-a618-7b948ab06773-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd28088f-9229-4753-a618-7b948ab06773\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.801283 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd28088f-9229-4753-a618-7b948ab06773-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd28088f-9229-4753-a618-7b948ab06773\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.806796 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd28088f-9229-4753-a618-7b948ab06773-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd28088f-9229-4753-a618-7b948ab06773\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.816884 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd28088f-9229-4753-a618-7b948ab06773-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd28088f-9229-4753-a618-7b948ab06773\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.825866 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd28088f-9229-4753-a618-7b948ab06773-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd28088f-9229-4753-a618-7b948ab06773\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.834055 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd28088f-9229-4753-a618-7b948ab06773-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd28088f-9229-4753-a618-7b948ab06773\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.861022 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zbs6\" (UniqueName: \"kubernetes.io/projected/fd28088f-9229-4753-a618-7b948ab06773-kube-api-access-8zbs6\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd28088f-9229-4753-a618-7b948ab06773\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:45 crc kubenswrapper[4727]: I1210 15:00:45.893272 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:46 crc kubenswrapper[4727]: I1210 15:00:46.198149 4727 generic.go:334] "Generic (PLEG): container finished" podID="46dda6a1-0c7e-4ea5-9ec5-aab327344ace" containerID="a39c45f63c2023b4e5026ee2b0ac6c6be6ec72e895c4047e3b8dff293ba0ea25" exitCode=0 Dec 10 15:00:46 crc kubenswrapper[4727]: I1210 15:00:46.198512 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" event={"ID":"46dda6a1-0c7e-4ea5-9ec5-aab327344ace","Type":"ContainerDied","Data":"a39c45f63c2023b4e5026ee2b0ac6c6be6ec72e895c4047e3b8dff293ba0ea25"} Dec 10 15:00:46 crc kubenswrapper[4727]: I1210 15:00:46.523898 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:00:46 crc kubenswrapper[4727]: I1210 15:00:46.587168 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3180cdb-857d-4bc1-9b84-a872dfa84cfe" path="/var/lib/kubelet/pods/b3180cdb-857d-4bc1-9b84-a872dfa84cfe/volumes" Dec 10 15:00:47 crc kubenswrapper[4727]: I1210 15:00:47.280550 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fd28088f-9229-4753-a618-7b948ab06773","Type":"ContainerStarted","Data":"5e06ff688ab59cbd1f0a49e1e49360951af18cf9119e7ec4d4d59bc49fafaf15"} Dec 10 15:00:47 crc kubenswrapper[4727]: I1210 15:00:47.994887 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:00:47 crc kubenswrapper[4727]: I1210 15:00:47.995534 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="21895a9a-4f96-4f38-b042-8fe46be851d3" containerName="nova-api-log" containerID="cri-o://0d68264c23e619321e7dbd78ec36af4a847e780c1c49ed99536d6bf07dbb7f01" gracePeriod=30 Dec 10 15:00:47 crc kubenswrapper[4727]: I1210 15:00:47.995660 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="21895a9a-4f96-4f38-b042-8fe46be851d3" containerName="nova-api-api" containerID="cri-o://d140cbbca78f26eccc09e986ffa914d8d618cb139bb3d4ba75e9f5078f134ac2" gracePeriod=30 Dec 10 15:00:48 crc kubenswrapper[4727]: I1210 15:00:48.293642 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fd28088f-9229-4753-a618-7b948ab06773","Type":"ContainerStarted","Data":"3cab841b1c2d9b3803cc1b94e43ab9370184c6060207bb9e363a6bd5ca07d586"} Dec 10 15:00:48 crc kubenswrapper[4727]: I1210 15:00:48.296393 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" event={"ID":"46dda6a1-0c7e-4ea5-9ec5-aab327344ace","Type":"ContainerStarted","Data":"a6185dccfad3e2d7b0ef60c87acb51e01d4c14cf32503bd0c5ebba7b441e79f4"} Dec 10 15:00:48 crc kubenswrapper[4727]: I1210 15:00:48.296574 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:48 crc kubenswrapper[4727]: I1210 15:00:48.298560 4727 generic.go:334] "Generic (PLEG): container finished" podID="21895a9a-4f96-4f38-b042-8fe46be851d3" containerID="0d68264c23e619321e7dbd78ec36af4a847e780c1c49ed99536d6bf07dbb7f01" exitCode=143 Dec 10 15:00:48 crc kubenswrapper[4727]: I1210 15:00:48.298606 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21895a9a-4f96-4f38-b042-8fe46be851d3","Type":"ContainerDied","Data":"0d68264c23e619321e7dbd78ec36af4a847e780c1c49ed99536d6bf07dbb7f01"} Dec 10 15:00:48 
crc kubenswrapper[4727]: I1210 15:00:48.327271 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.327243825 podStartE2EDuration="3.327243825s" podCreationTimestamp="2025-12-10 15:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:00:48.31834963 +0000 UTC m=+1752.513124172" watchObservedRunningTime="2025-12-10 15:00:48.327243825 +0000 UTC m=+1752.522018367" Dec 10 15:00:48 crc kubenswrapper[4727]: I1210 15:00:48.343652 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" podStartSLOduration=5.343620409 podStartE2EDuration="5.343620409s" podCreationTimestamp="2025-12-10 15:00:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:00:48.338410807 +0000 UTC m=+1752.533185349" watchObservedRunningTime="2025-12-10 15:00:48.343620409 +0000 UTC m=+1752.538394951" Dec 10 15:00:51 crc kubenswrapper[4727]: I1210 15:00:51.110210 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:51 crc kubenswrapper[4727]: I1210 15:00:51.905225 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:00:51 crc kubenswrapper[4727]: I1210 15:00:51.905860 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="ceilometer-central-agent" containerID="cri-o://d73999af619850b6c8a9f166583e763b52f1612f86dbbe752bca052eb1da1c5f" gracePeriod=30 Dec 10 15:00:51 crc kubenswrapper[4727]: I1210 15:00:51.905922 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="sg-core" containerID="cri-o://d3a749fc15d281c6fa8c86fa7be3076db686710d8208fde95c54e7d31c99b7cb" gracePeriod=30 Dec 10 15:00:51 crc kubenswrapper[4727]: I1210 15:00:51.905985 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="ceilometer-notification-agent" containerID="cri-o://63881adbe210ada6fb903b19a48b9c7cadb46d53decce91a0c1b6a9f1fab6f1a" gracePeriod=30 Dec 10 15:00:51 crc kubenswrapper[4727]: I1210 15:00:51.906022 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="proxy-httpd" containerID="cri-o://b0971705c837ba1755857e6caa5b76f82e0265ca87fab1c6c9e138470f145e21" gracePeriod=30 Dec 10 15:00:51 crc kubenswrapper[4727]: I1210 15:00:51.915504 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.212:3000/\": EOF" Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.389025 4727 generic.go:334] "Generic (PLEG): container finished" podID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerID="b0971705c837ba1755857e6caa5b76f82e0265ca87fab1c6c9e138470f145e21" exitCode=0 Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.389355 4727 generic.go:334] "Generic (PLEG): container finished" podID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" 
containerID="d3a749fc15d281c6fa8c86fa7be3076db686710d8208fde95c54e7d31c99b7cb" exitCode=2 Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.389201 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949f6395-0c21-4bab-9cc1-6dc83aad5aac","Type":"ContainerDied","Data":"b0971705c837ba1755857e6caa5b76f82e0265ca87fab1c6c9e138470f145e21"} Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.389490 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949f6395-0c21-4bab-9cc1-6dc83aad5aac","Type":"ContainerDied","Data":"d3a749fc15d281c6fa8c86fa7be3076db686710d8208fde95c54e7d31c99b7cb"} Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.392459 4727 generic.go:334] "Generic (PLEG): container finished" podID="21895a9a-4f96-4f38-b042-8fe46be851d3" containerID="d140cbbca78f26eccc09e986ffa914d8d618cb139bb3d4ba75e9f5078f134ac2" exitCode=0 Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.392518 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21895a9a-4f96-4f38-b042-8fe46be851d3","Type":"ContainerDied","Data":"d140cbbca78f26eccc09e986ffa914d8d618cb139bb3d4ba75e9f5078f134ac2"} Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.497687 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.562446 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21895a9a-4f96-4f38-b042-8fe46be851d3-logs\") pod \"21895a9a-4f96-4f38-b042-8fe46be851d3\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.562662 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21895a9a-4f96-4f38-b042-8fe46be851d3-config-data\") pod \"21895a9a-4f96-4f38-b042-8fe46be851d3\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.562819 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29jbh\" (UniqueName: \"kubernetes.io/projected/21895a9a-4f96-4f38-b042-8fe46be851d3-kube-api-access-29jbh\") pod \"21895a9a-4f96-4f38-b042-8fe46be851d3\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.562875 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21895a9a-4f96-4f38-b042-8fe46be851d3-combined-ca-bundle\") pod \"21895a9a-4f96-4f38-b042-8fe46be851d3\" (UID: \"21895a9a-4f96-4f38-b042-8fe46be851d3\") " Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.563348 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21895a9a-4f96-4f38-b042-8fe46be851d3-logs" (OuterVolumeSpecName: "logs") pod "21895a9a-4f96-4f38-b042-8fe46be851d3" (UID: "21895a9a-4f96-4f38-b042-8fe46be851d3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.564651 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:00:52 crc kubenswrapper[4727]: E1210 15:00:52.565558 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.565599 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21895a9a-4f96-4f38-b042-8fe46be851d3-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.576051 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21895a9a-4f96-4f38-b042-8fe46be851d3-kube-api-access-29jbh" (OuterVolumeSpecName: "kube-api-access-29jbh") pod "21895a9a-4f96-4f38-b042-8fe46be851d3" (UID: "21895a9a-4f96-4f38-b042-8fe46be851d3"). InnerVolumeSpecName "kube-api-access-29jbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.619275 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21895a9a-4f96-4f38-b042-8fe46be851d3-config-data" (OuterVolumeSpecName: "config-data") pod "21895a9a-4f96-4f38-b042-8fe46be851d3" (UID: "21895a9a-4f96-4f38-b042-8fe46be851d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.621238 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21895a9a-4f96-4f38-b042-8fe46be851d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21895a9a-4f96-4f38-b042-8fe46be851d3" (UID: "21895a9a-4f96-4f38-b042-8fe46be851d3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.667149 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21895a9a-4f96-4f38-b042-8fe46be851d3-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.667184 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29jbh\" (UniqueName: \"kubernetes.io/projected/21895a9a-4f96-4f38-b042-8fe46be851d3-kube-api-access-29jbh\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:52 crc kubenswrapper[4727]: I1210 15:00:52.667196 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21895a9a-4f96-4f38-b042-8fe46be851d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.452152 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21895a9a-4f96-4f38-b042-8fe46be851d3","Type":"ContainerDied","Data":"66e053d483b0a6a0b737389ec71ca5c58f3e7aaca4e89a47f31099701ec8b60e"} Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.452486 4727 scope.go:117] "RemoveContainer" containerID="d140cbbca78f26eccc09e986ffa914d8d618cb139bb3d4ba75e9f5078f134ac2" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.452658 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.492274 4727 generic.go:334] "Generic (PLEG): container finished" podID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerID="d73999af619850b6c8a9f166583e763b52f1612f86dbbe752bca052eb1da1c5f" exitCode=0 Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.492330 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949f6395-0c21-4bab-9cc1-6dc83aad5aac","Type":"ContainerDied","Data":"d73999af619850b6c8a9f166583e763b52f1612f86dbbe752bca052eb1da1c5f"} Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.529995 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.533719 4727 scope.go:117] "RemoveContainer" containerID="0d68264c23e619321e7dbd78ec36af4a847e780c1c49ed99536d6bf07dbb7f01" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.547537 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.600477 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 10 15:00:53 crc kubenswrapper[4727]: E1210 15:00:53.601091 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21895a9a-4f96-4f38-b042-8fe46be851d3" containerName="nova-api-api" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.601115 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="21895a9a-4f96-4f38-b042-8fe46be851d3" containerName="nova-api-api" Dec 10 15:00:53 crc kubenswrapper[4727]: E1210 15:00:53.601153 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21895a9a-4f96-4f38-b042-8fe46be851d3" containerName="nova-api-log" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.601163 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="21895a9a-4f96-4f38-b042-8fe46be851d3" containerName="nova-api-log" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.601457 4727 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="21895a9a-4f96-4f38-b042-8fe46be851d3" containerName="nova-api-api" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.601494 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="21895a9a-4f96-4f38-b042-8fe46be851d3" containerName="nova-api-log" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.604133 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.611784 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.616741 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.621212 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.631059 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.799379 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.799469 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.799502 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8t68\" (UniqueName: \"kubernetes.io/projected/e4076258-0b0c-4084-ad95-a75be0238578-kube-api-access-s8t68\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.799559 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4076258-0b0c-4084-ad95-a75be0238578-logs\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.799618 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-config-data\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.799674 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-public-tls-certs\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.903883 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.904008 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.904036 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8t68\" (UniqueName: \"kubernetes.io/projected/e4076258-0b0c-4084-ad95-a75be0238578-kube-api-access-s8t68\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.904091 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4076258-0b0c-4084-ad95-a75be0238578-logs\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.904143 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-config-data\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.904217 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-public-tls-certs\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.910519 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4076258-0b0c-4084-ad95-a75be0238578-logs\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.921530 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-config-data\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.921558 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.921985 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-public-tls-certs\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.928762 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:53 crc kubenswrapper[4727]: I1210 15:00:53.940773 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8t68\" (UniqueName: \"kubernetes.io/projected/e4076258-0b0c-4084-ad95-a75be0238578-kube-api-access-s8t68\") pod \"nova-api-0\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " pod="openstack/nova-api-0" Dec 10 15:00:54 crc kubenswrapper[4727]: I1210 15:00:54.098480 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" Dec 10 15:00:54 crc kubenswrapper[4727]: I1210 15:00:54.186852 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-lvfng"] Dec 10 15:00:54 crc kubenswrapper[4727]: I1210 15:00:54.187403 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cd565959-lvfng" podUID="13fd83eb-02fc-4d4a-938c-0c363d300ae6" containerName="dnsmasq-dns" containerID="cri-o://89f32ed4f135efb6c1c8cd2e48718ff1c97bb00aeea6bb188d939cadf69f5d63" gracePeriod=10 Dec 10 15:00:54 crc kubenswrapper[4727]: I1210 15:00:54.227979 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:00:54 crc kubenswrapper[4727]: I1210 15:00:54.702331 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.212:3000/\": dial tcp 10.217.0.212:3000: connect: connection refused" Dec 10 15:00:54 crc kubenswrapper[4727]: I1210 15:00:54.727531 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21895a9a-4f96-4f38-b042-8fe46be851d3" path="/var/lib/kubelet/pods/21895a9a-4f96-4f38-b042-8fe46be851d3/volumes" Dec 10 15:00:55 crc kubenswrapper[4727]: I1210 15:00:55.163861 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:00:55 crc kubenswrapper[4727]: I1210 15:00:55.782839 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4076258-0b0c-4084-ad95-a75be0238578","Type":"ContainerStarted","Data":"aea51930c49240ad76b6dff47b4bbc871989fec9e59473f28ee09246ba762f94"} Dec 10 15:00:55 crc kubenswrapper[4727]: I1210 15:00:55.783394 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4076258-0b0c-4084-ad95-a75be0238578","Type":"ContainerStarted","Data":"8340c7c5411c949d41495472854f537de9f73a3f1f15cfd58887998d5a5411af"} Dec 10 15:00:55 crc kubenswrapper[4727]: I1210 15:00:55.786801 4727 generic.go:334] "Generic (PLEG): container finished" podID="13fd83eb-02fc-4d4a-938c-0c363d300ae6" containerID="89f32ed4f135efb6c1c8cd2e48718ff1c97bb00aeea6bb188d939cadf69f5d63" exitCode=0 Dec 10 15:00:55 crc kubenswrapper[4727]: I1210 15:00:55.786860 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-lvfng" event={"ID":"13fd83eb-02fc-4d4a-938c-0c363d300ae6","Type":"ContainerDied","Data":"89f32ed4f135efb6c1c8cd2e48718ff1c97bb00aeea6bb188d939cadf69f5d63"} Dec 10 15:00:55 crc kubenswrapper[4727]: I1210 15:00:55.893019 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 15:00:55 crc kubenswrapper[4727]: I1210 15:00:55.893639 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:55 crc kubenswrapper[4727]: I1210 15:00:55.929275 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:55 crc kubenswrapper[4727]: I1210 15:00:55.974666 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-dns-swift-storage-0\") pod \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " Dec 10 15:00:55 crc kubenswrapper[4727]: I1210 15:00:55.974840 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-config\") pod \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " Dec 10 15:00:55 crc kubenswrapper[4727]: I1210 15:00:55.974957 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-dns-svc\") pod \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " Dec 10 15:00:55 crc kubenswrapper[4727]: I1210 15:00:55.975003 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45b7z\" (UniqueName: \"kubernetes.io/projected/13fd83eb-02fc-4d4a-938c-0c363d300ae6-kube-api-access-45b7z\") pod \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " Dec 10 15:00:55 crc kubenswrapper[4727]: I1210 15:00:55.975042 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-ovsdbserver-nb\") pod \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " Dec 10 15:00:55 crc kubenswrapper[4727]: I1210 15:00:55.975069 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-ovsdbserver-sb\") pod \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\" (UID: \"13fd83eb-02fc-4d4a-938c-0c363d300ae6\") " Dec 10 15:00:55 crc kubenswrapper[4727]: I1210 15:00:55.985562 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13fd83eb-02fc-4d4a-938c-0c363d300ae6-kube-api-access-45b7z" (OuterVolumeSpecName: "kube-api-access-45b7z") pod "13fd83eb-02fc-4d4a-938c-0c363d300ae6" (UID: "13fd83eb-02fc-4d4a-938c-0c363d300ae6"). InnerVolumeSpecName "kube-api-access-45b7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.067675 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "13fd83eb-02fc-4d4a-938c-0c363d300ae6" (UID: "13fd83eb-02fc-4d4a-938c-0c363d300ae6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.085547 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-config" (OuterVolumeSpecName: "config") pod "13fd83eb-02fc-4d4a-938c-0c363d300ae6" (UID: "13fd83eb-02fc-4d4a-938c-0c363d300ae6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.090775 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.090821 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45b7z\" (UniqueName: \"kubernetes.io/projected/13fd83eb-02fc-4d4a-938c-0c363d300ae6-kube-api-access-45b7z\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.090835 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.095685 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "13fd83eb-02fc-4d4a-938c-0c363d300ae6" (UID: "13fd83eb-02fc-4d4a-938c-0c363d300ae6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.123852 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13fd83eb-02fc-4d4a-938c-0c363d300ae6" (UID: "13fd83eb-02fc-4d4a-938c-0c363d300ae6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.574782 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.574812 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.651826 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "13fd83eb-02fc-4d4a-938c-0c363d300ae6" (UID: "13fd83eb-02fc-4d4a-938c-0c363d300ae6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.680758 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13fd83eb-02fc-4d4a-938c-0c363d300ae6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.801694 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-lvfng" event={"ID":"13fd83eb-02fc-4d4a-938c-0c363d300ae6","Type":"ContainerDied","Data":"a39b1f24f069be2e1a016ba8b6076c9585e5bcd3dcd65c0c2dad1b96ed89308b"} Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.801759 4727 scope.go:117] "RemoveContainer" containerID="89f32ed4f135efb6c1c8cd2e48718ff1c97bb00aeea6bb188d939cadf69f5d63" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.801966 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-lvfng" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.813011 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4076258-0b0c-4084-ad95-a75be0238578","Type":"ContainerStarted","Data":"9c7ef0c8d771309efa2212fafd523c9c7966410b28842f7fcd267534ea4c6b2a"} Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.850569 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.851026 4727 scope.go:117] "RemoveContainer" containerID="543588c741b9719e1876f3726d37cfe51c76bc752e3539b493082b6037d25d85" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.868523 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.8684925310000002 podStartE2EDuration="3.868492531s" podCreationTimestamp="2025-12-10 15:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:00:56.85857503 +0000 UTC m=+1761.053349582" watchObservedRunningTime="2025-12-10 15:00:56.868492531 +0000 UTC m=+1761.063267073" Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.903387 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-lvfng"] Dec 10 15:00:56 crc kubenswrapper[4727]: I1210 15:00:56.934240 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-lvfng"] Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.839681 4727 generic.go:334] "Generic (PLEG): container finished" podID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerID="63881adbe210ada6fb903b19a48b9c7cadb46d53decce91a0c1b6a9f1fab6f1a" exitCode=0 Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.839764 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949f6395-0c21-4bab-9cc1-6dc83aad5aac","Type":"ContainerDied","Data":"63881adbe210ada6fb903b19a48b9c7cadb46d53decce91a0c1b6a9f1fab6f1a"} Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.840216 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949f6395-0c21-4bab-9cc1-6dc83aad5aac","Type":"ContainerDied","Data":"d29f558227f51f4c19e24e8d390a9b465ad4eb3c0bbbf39944f9be4112b7a820"} Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.840284 4727 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d29f558227f51f4c19e24e8d390a9b465ad4eb3c0bbbf39944f9be4112b7a820" Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.895344 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.923608 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-sg-core-conf-yaml\") pod \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.923862 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-scripts\") pod \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.923997 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qds5m\" (UniqueName: \"kubernetes.io/projected/949f6395-0c21-4bab-9cc1-6dc83aad5aac-kube-api-access-qds5m\") pod \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.924083 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-config-data\") pod \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.924162 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949f6395-0c21-4bab-9cc1-6dc83aad5aac-run-httpd\") pod \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.924207 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949f6395-0c21-4bab-9cc1-6dc83aad5aac-log-httpd\") pod \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.924235 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-combined-ca-bundle\") pod \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\" (UID: \"949f6395-0c21-4bab-9cc1-6dc83aad5aac\") " Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.925179 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/949f6395-0c21-4bab-9cc1-6dc83aad5aac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "949f6395-0c21-4bab-9cc1-6dc83aad5aac" (UID: "949f6395-0c21-4bab-9cc1-6dc83aad5aac"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.926066 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/949f6395-0c21-4bab-9cc1-6dc83aad5aac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "949f6395-0c21-4bab-9cc1-6dc83aad5aac" (UID: "949f6395-0c21-4bab-9cc1-6dc83aad5aac"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.935109 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-scripts" (OuterVolumeSpecName: "scripts") pod "949f6395-0c21-4bab-9cc1-6dc83aad5aac" (UID: "949f6395-0c21-4bab-9cc1-6dc83aad5aac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.935359 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949f6395-0c21-4bab-9cc1-6dc83aad5aac-kube-api-access-qds5m" (OuterVolumeSpecName: "kube-api-access-qds5m") pod "949f6395-0c21-4bab-9cc1-6dc83aad5aac" (UID: "949f6395-0c21-4bab-9cc1-6dc83aad5aac"). InnerVolumeSpecName "kube-api-access-qds5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:57 crc kubenswrapper[4727]: I1210 15:00:57.987408 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "949f6395-0c21-4bab-9cc1-6dc83aad5aac" (UID: "949f6395-0c21-4bab-9cc1-6dc83aad5aac"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.028406 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.028451 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qds5m\" (UniqueName: \"kubernetes.io/projected/949f6395-0c21-4bab-9cc1-6dc83aad5aac-kube-api-access-qds5m\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.028468 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949f6395-0c21-4bab-9cc1-6dc83aad5aac-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.028480 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949f6395-0c21-4bab-9cc1-6dc83aad5aac-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.028491 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.092710 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "949f6395-0c21-4bab-9cc1-6dc83aad5aac" (UID: "949f6395-0c21-4bab-9cc1-6dc83aad5aac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.130284 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.132127 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-config-data" (OuterVolumeSpecName: "config-data") pod "949f6395-0c21-4bab-9cc1-6dc83aad5aac" (UID: "949f6395-0c21-4bab-9cc1-6dc83aad5aac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.503170 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949f6395-0c21-4bab-9cc1-6dc83aad5aac-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.578509 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13fd83eb-02fc-4d4a-938c-0c363d300ae6" path="/var/lib/kubelet/pods/13fd83eb-02fc-4d4a-938c-0c363d300ae6/volumes" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.851200 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.884619 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.902104 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.930109 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:00:58 crc kubenswrapper[4727]: E1210 15:00:58.930736 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13fd83eb-02fc-4d4a-938c-0c363d300ae6" containerName="dnsmasq-dns" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.931207 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="13fd83eb-02fc-4d4a-938c-0c363d300ae6" containerName="dnsmasq-dns" Dec 10 15:00:58 crc kubenswrapper[4727]: E1210 15:00:58.931278 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="ceilometer-notification-agent" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.931332 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="ceilometer-notification-agent" Dec 10 15:00:58 crc kubenswrapper[4727]: E1210 15:00:58.931410 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="proxy-httpd" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.931467 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="proxy-httpd" Dec 10 15:00:58 crc kubenswrapper[4727]: E1210 15:00:58.931561 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="sg-core" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.932021 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="sg-core" Dec 10 15:00:58 crc kubenswrapper[4727]: E1210 15:00:58.932134 4727 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="13fd83eb-02fc-4d4a-938c-0c363d300ae6" containerName="init" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.932211 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="13fd83eb-02fc-4d4a-938c-0c363d300ae6" containerName="init" Dec 10 15:00:58 crc kubenswrapper[4727]: E1210 15:00:58.932297 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="ceilometer-central-agent" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.932376 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="ceilometer-central-agent" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.932747 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="ceilometer-notification-agent" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.932853 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="ceilometer-central-agent" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.932959 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="proxy-httpd" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.933075 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" containerName="sg-core" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.933164 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="13fd83eb-02fc-4d4a-938c-0c363d300ae6" containerName="dnsmasq-dns" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.936892 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.939835 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.940187 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 15:00:58 crc kubenswrapper[4727]: I1210 15:00:58.950239 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.117450 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be66e59e-7621-4438-8c4c-5d5791a3f3f8-run-httpd\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.117817 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-config-data\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.117961 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.118161 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkvnp\" (UniqueName: \"kubernetes.io/projected/be66e59e-7621-4438-8c4c-5d5791a3f3f8-kube-api-access-bkvnp\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.118341 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.118501 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-scripts\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.118642 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be66e59e-7621-4438-8c4c-5d5791a3f3f8-log-httpd\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.220148 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkvnp\" (UniqueName: \"kubernetes.io/projected/be66e59e-7621-4438-8c4c-5d5791a3f3f8-kube-api-access-bkvnp\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: 
I1210 15:00:59.220275 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.220419 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-scripts\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.220468 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be66e59e-7621-4438-8c4c-5d5791a3f3f8-log-httpd\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.220608 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be66e59e-7621-4438-8c4c-5d5791a3f3f8-run-httpd\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.220648 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-config-data\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.220674 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.221510 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be66e59e-7621-4438-8c4c-5d5791a3f3f8-run-httpd\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.222099 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be66e59e-7621-4438-8c4c-5d5791a3f3f8-log-httpd\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.226860 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.227427 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-config-data\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.229602 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.239536 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-scripts\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.260335 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkvnp\" (UniqueName: \"kubernetes.io/projected/be66e59e-7621-4438-8c4c-5d5791a3f3f8-kube-api-access-bkvnp\") pod \"ceilometer-0\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") " pod="openstack/ceilometer-0" Dec 10 15:00:59 crc kubenswrapper[4727]: I1210 15:00:59.557006 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.231056 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29422981-hs2nn"] Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.233627 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.238653 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29422981-hs2nn"] Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.254398 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-config-data\") pod \"keystone-cron-29422981-hs2nn\" (UID: \"5686911a-63ad-487e-8dc4-c9c833c20f51\") " pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.254790 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d5hs\" (UniqueName: \"kubernetes.io/projected/5686911a-63ad-487e-8dc4-c9c833c20f51-kube-api-access-4d5hs\") pod \"keystone-cron-29422981-hs2nn\" (UID: \"5686911a-63ad-487e-8dc4-c9c833c20f51\") " pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.255359 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-fernet-keys\") pod \"keystone-cron-29422981-hs2nn\" (UID: \"5686911a-63ad-487e-8dc4-c9c833c20f51\") " pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.256171 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-combined-ca-bundle\") pod \"keystone-cron-29422981-hs2nn\" (UID: \"5686911a-63ad-487e-8dc4-c9c833c20f51\") " pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.358575 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-combined-ca-bundle\") pod \"keystone-cron-29422981-hs2nn\" (UID: 
\"5686911a-63ad-487e-8dc4-c9c833c20f51\") " pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.358701 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-config-data\") pod \"keystone-cron-29422981-hs2nn\" (UID: \"5686911a-63ad-487e-8dc4-c9c833c20f51\") " pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.358883 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d5hs\" (UniqueName: \"kubernetes.io/projected/5686911a-63ad-487e-8dc4-c9c833c20f51-kube-api-access-4d5hs\") pod \"keystone-cron-29422981-hs2nn\" (UID: \"5686911a-63ad-487e-8dc4-c9c833c20f51\") " pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.358968 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-fernet-keys\") pod \"keystone-cron-29422981-hs2nn\" (UID: \"5686911a-63ad-487e-8dc4-c9c833c20f51\") " pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.375548 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-fernet-keys\") pod \"keystone-cron-29422981-hs2nn\" (UID: \"5686911a-63ad-487e-8dc4-c9c833c20f51\") " pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.377927 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-config-data\") pod \"keystone-cron-29422981-hs2nn\" (UID: \"5686911a-63ad-487e-8dc4-c9c833c20f51\") " pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.378935 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-combined-ca-bundle\") pod \"keystone-cron-29422981-hs2nn\" (UID: \"5686911a-63ad-487e-8dc4-c9c833c20f51\") " pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.380445 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d5hs\" (UniqueName: \"kubernetes.io/projected/5686911a-63ad-487e-8dc4-c9c833c20f51-kube-api-access-4d5hs\") pod \"keystone-cron-29422981-hs2nn\" (UID: \"5686911a-63ad-487e-8dc4-c9c833c20f51\") " pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:00 crc kubenswrapper[4727]: W1210 15:01:00.571552 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe66e59e_7621_4438_8c4c_5d5791a3f3f8.slice/crio-040c2ccc911b40ea2da2c7ed1f4cd1816bc108bf2bb346ddcc3d01bf0eea562f WatchSource:0}: Error finding container 040c2ccc911b40ea2da2c7ed1f4cd1816bc108bf2bb346ddcc3d01bf0eea562f: Status 404 returned error can't find the container with id 040c2ccc911b40ea2da2c7ed1f4cd1816bc108bf2bb346ddcc3d01bf0eea562f Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.594850 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949f6395-0c21-4bab-9cc1-6dc83aad5aac" 
path="/var/lib/kubelet/pods/949f6395-0c21-4bab-9cc1-6dc83aad5aac/volumes" Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.596524 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:01:00 crc kubenswrapper[4727]: I1210 15:01:00.634544 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:01 crc kubenswrapper[4727]: I1210 15:01:01.181276 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29422981-hs2nn"] Dec 10 15:01:01 crc kubenswrapper[4727]: I1210 15:01:01.227536 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be66e59e-7621-4438-8c4c-5d5791a3f3f8","Type":"ContainerStarted","Data":"040c2ccc911b40ea2da2c7ed1f4cd1816bc108bf2bb346ddcc3d01bf0eea562f"} Dec 10 15:01:01 crc kubenswrapper[4727]: I1210 15:01:01.228961 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29422981-hs2nn" event={"ID":"5686911a-63ad-487e-8dc4-c9c833c20f51","Type":"ContainerStarted","Data":"7575b75e3c9d8d3d18af20c123b17e4266ded492411aec98442012f03eeaa127"} Dec 10 15:01:02 crc kubenswrapper[4727]: I1210 15:01:02.260213 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29422981-hs2nn" event={"ID":"5686911a-63ad-487e-8dc4-c9c833c20f51","Type":"ContainerStarted","Data":"508bd8aa5038c9534013efda2edd480e4d2307a01a650d27badbe598048eba1e"} Dec 10 15:01:02 crc kubenswrapper[4727]: I1210 15:01:02.267392 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be66e59e-7621-4438-8c4c-5d5791a3f3f8","Type":"ContainerStarted","Data":"fd3850753860bffa8f1f01595dd4e45f9223c5a2265cfc3dc3d355829f67b3fd"} Dec 10 15:01:02 crc kubenswrapper[4727]: I1210 15:01:02.286295 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29422981-hs2nn" podStartSLOduration=2.286274476 podStartE2EDuration="2.286274476s" podCreationTimestamp="2025-12-10 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:01:02.277105744 +0000 UTC m=+1766.471880306" watchObservedRunningTime="2025-12-10 15:01:02.286274476 +0000 UTC m=+1766.481049018" Dec 10 15:01:04 crc kubenswrapper[4727]: I1210 15:01:04.228960 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:01:04 crc kubenswrapper[4727]: I1210 15:01:04.229320 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:01:04 crc kubenswrapper[4727]: I1210 15:01:04.745752 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:01:04 crc kubenswrapper[4727]: E1210 15:01:04.746062 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:01:05 crc kubenswrapper[4727]: I1210 15:01:05.245536 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:01:05 
crc kubenswrapper[4727]: I1210 15:01:05.247095 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e4076258-0b0c-4084-ad95-a75be0238578" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:01:05 crc kubenswrapper[4727]: I1210 15:01:05.247184 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e4076258-0b0c-4084-ad95-a75be0238578" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:01:05 crc kubenswrapper[4727]: I1210 15:01:05.825434 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be66e59e-7621-4438-8c4c-5d5791a3f3f8","Type":"ContainerStarted","Data":"932455ff6e476d141d22f9cb369e7ac3975796b19874014c52da8e1222504607"} Dec 10 15:01:08 crc kubenswrapper[4727]: I1210 15:01:08.383649 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be66e59e-7621-4438-8c4c-5d5791a3f3f8","Type":"ContainerStarted","Data":"ce34707cd5d751a596804fd58bd973206cc4477ef40f33cc3ef1b748fe5fb58b"} Dec 10 15:01:09 crc kubenswrapper[4727]: I1210 15:01:09.400725 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be66e59e-7621-4438-8c4c-5d5791a3f3f8","Type":"ContainerStarted","Data":"fb6e5a61ff936a6d199bbfcb2b378dc57b41deb874c425f35ac7d9b763fdf02f"} Dec 10 15:01:09 crc kubenswrapper[4727]: I1210 15:01:09.402718 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:01:09 crc kubenswrapper[4727]: I1210 15:01:09.447273 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.045444875 podStartE2EDuration="11.447247996s" podCreationTimestamp="2025-12-10 15:00:58 +0000 UTC" firstStartedPulling="2025-12-10 15:01:00.576865961 +0000 UTC m=+1764.771640503" lastFinishedPulling="2025-12-10 15:01:08.978669082 +0000 UTC m=+1773.173443624" observedRunningTime="2025-12-10 15:01:09.427969138 +0000 UTC m=+1773.622743680" watchObservedRunningTime="2025-12-10 15:01:09.447247996 +0000 UTC m=+1773.642022528" Dec 10 15:01:09 crc kubenswrapper[4727]: I1210 15:01:09.968595 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" podUID="eae9110d-9b14-4360-9e66-60bf84efae12" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:01:10 crc kubenswrapper[4727]: I1210 15:01:10.013159 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qllqx" podUID="eae9110d-9b14-4360-9e66-60bf84efae12" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:01:10 crc kubenswrapper[4727]: I1210 15:01:10.418546 4727 generic.go:334] "Generic (PLEG): container finished" podID="5686911a-63ad-487e-8dc4-c9c833c20f51" containerID="508bd8aa5038c9534013efda2edd480e4d2307a01a650d27badbe598048eba1e" exitCode=0 Dec 10 15:01:10 crc kubenswrapper[4727]: I1210 15:01:10.418987 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29422981-hs2nn" event={"ID":"5686911a-63ad-487e-8dc4-c9c833c20f51","Type":"ContainerDied","Data":"508bd8aa5038c9534013efda2edd480e4d2307a01a650d27badbe598048eba1e"} Dec 10 15:01:10 crc kubenswrapper[4727]: I1210 15:01:10.421394 4727 generic.go:334] "Generic (PLEG): container finished" podID="f00dd75e-d42e-41fa-93f2-728409ffcb47" containerID="2c30ce2bb5b57642f62585f23a3a6eca2a90960fd0dca6100f3e33d90499b830" exitCode=0 Dec 10 15:01:10 crc kubenswrapper[4727]: I1210 15:01:10.421482 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5d4rn" event={"ID":"f00dd75e-d42e-41fa-93f2-728409ffcb47","Type":"ContainerDied","Data":"2c30ce2bb5b57642f62585f23a3a6eca2a90960fd0dca6100f3e33d90499b830"} Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.412268 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.419971 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.515690 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-scripts\") pod \"f00dd75e-d42e-41fa-93f2-728409ffcb47\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.515861 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-combined-ca-bundle\") pod \"5686911a-63ad-487e-8dc4-c9c833c20f51\" (UID: \"5686911a-63ad-487e-8dc4-c9c833c20f51\") " Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.515954 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnsvt\" (UniqueName: \"kubernetes.io/projected/f00dd75e-d42e-41fa-93f2-728409ffcb47-kube-api-access-pnsvt\") pod \"f00dd75e-d42e-41fa-93f2-728409ffcb47\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.516021 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-config-data\") pod \"f00dd75e-d42e-41fa-93f2-728409ffcb47\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.516068 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-combined-ca-bundle\") pod \"f00dd75e-d42e-41fa-93f2-728409ffcb47\" (UID: \"f00dd75e-d42e-41fa-93f2-728409ffcb47\") " Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.516145 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-fernet-keys\") pod \"5686911a-63ad-487e-8dc4-c9c833c20f51\" (UID: \"5686911a-63ad-487e-8dc4-c9c833c20f51\") " Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.516184 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d5hs\" (UniqueName: 
\"kubernetes.io/projected/5686911a-63ad-487e-8dc4-c9c833c20f51-kube-api-access-4d5hs\") pod \"5686911a-63ad-487e-8dc4-c9c833c20f51\" (UID: \"5686911a-63ad-487e-8dc4-c9c833c20f51\") " Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.516295 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-config-data\") pod \"5686911a-63ad-487e-8dc4-c9c833c20f51\" (UID: \"5686911a-63ad-487e-8dc4-c9c833c20f51\") " Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.539664 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-scripts" (OuterVolumeSpecName: "scripts") pod "f00dd75e-d42e-41fa-93f2-728409ffcb47" (UID: "f00dd75e-d42e-41fa-93f2-728409ffcb47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.539797 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5686911a-63ad-487e-8dc4-c9c833c20f51-kube-api-access-4d5hs" (OuterVolumeSpecName: "kube-api-access-4d5hs") pod "5686911a-63ad-487e-8dc4-c9c833c20f51" (UID: "5686911a-63ad-487e-8dc4-c9c833c20f51"). InnerVolumeSpecName "kube-api-access-4d5hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.540441 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5686911a-63ad-487e-8dc4-c9c833c20f51" (UID: "5686911a-63ad-487e-8dc4-c9c833c20f51"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.551586 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f00dd75e-d42e-41fa-93f2-728409ffcb47-kube-api-access-pnsvt" (OuterVolumeSpecName: "kube-api-access-pnsvt") pod "f00dd75e-d42e-41fa-93f2-728409ffcb47" (UID: "f00dd75e-d42e-41fa-93f2-728409ffcb47"). InnerVolumeSpecName "kube-api-access-pnsvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.556153 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-config-data" (OuterVolumeSpecName: "config-data") pod "f00dd75e-d42e-41fa-93f2-728409ffcb47" (UID: "f00dd75e-d42e-41fa-93f2-728409ffcb47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.556205 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5686911a-63ad-487e-8dc4-c9c833c20f51" (UID: "5686911a-63ad-487e-8dc4-c9c833c20f51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.557864 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f00dd75e-d42e-41fa-93f2-728409ffcb47" (UID: "f00dd75e-d42e-41fa-93f2-728409ffcb47"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.589448 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-config-data" (OuterVolumeSpecName: "config-data") pod "5686911a-63ad-487e-8dc4-c9c833c20f51" (UID: "5686911a-63ad-487e-8dc4-c9c833c20f51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.619672 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.619717 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.619729 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.619748 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnsvt\" (UniqueName: \"kubernetes.io/projected/f00dd75e-d42e-41fa-93f2-728409ffcb47-kube-api-access-pnsvt\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.619759 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.619769 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00dd75e-d42e-41fa-93f2-728409ffcb47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.619778 4727 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5686911a-63ad-487e-8dc4-c9c833c20f51-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.619788 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d5hs\" (UniqueName: \"kubernetes.io/projected/5686911a-63ad-487e-8dc4-c9c833c20f51-kube-api-access-4d5hs\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.661661 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5d4rn" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.669259 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29422981-hs2nn" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.712207 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5d4rn" event={"ID":"f00dd75e-d42e-41fa-93f2-728409ffcb47","Type":"ContainerDied","Data":"91d2599f586d9d843e7d86878d197f06ae4719c28280f0f1d45ea4c63c3d850a"} Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.712261 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d2599f586d9d843e7d86878d197f06ae4719c28280f0f1d45ea4c63c3d850a" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.712276 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29422981-hs2nn" event={"ID":"5686911a-63ad-487e-8dc4-c9c833c20f51","Type":"ContainerDied","Data":"7575b75e3c9d8d3d18af20c123b17e4266ded492411aec98442012f03eeaa127"} Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.712311 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7575b75e3c9d8d3d18af20c123b17e4266ded492411aec98442012f03eeaa127" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.727592 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 10 15:01:12 crc kubenswrapper[4727]: E1210 15:01:12.728435 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00dd75e-d42e-41fa-93f2-728409ffcb47" containerName="nova-cell1-conductor-db-sync" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.728459 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00dd75e-d42e-41fa-93f2-728409ffcb47" containerName="nova-cell1-conductor-db-sync" Dec 10 15:01:12 crc kubenswrapper[4727]: E1210 15:01:12.728514 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5686911a-63ad-487e-8dc4-c9c833c20f51" containerName="keystone-cron" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.728523 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5686911a-63ad-487e-8dc4-c9c833c20f51" containerName="keystone-cron" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.728852 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f00dd75e-d42e-41fa-93f2-728409ffcb47" containerName="nova-cell1-conductor-db-sync" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.728910 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5686911a-63ad-487e-8dc4-c9c833c20f51" containerName="keystone-cron" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.732365 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.735219 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 10 15:01:12 crc kubenswrapper[4727]: I1210 15:01:12.745359 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 10 15:01:13 crc kubenswrapper[4727]: I1210 15:01:13.129244 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b933c0-cf5d-491a-99f2-901be53e7950-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d5b933c0-cf5d-491a-99f2-901be53e7950\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:01:13 crc kubenswrapper[4727]: I1210 15:01:13.129546 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bvrr\" (UniqueName: \"kubernetes.io/projected/d5b933c0-cf5d-491a-99f2-901be53e7950-kube-api-access-8bvrr\") pod \"nova-cell1-conductor-0\" (UID: \"d5b933c0-cf5d-491a-99f2-901be53e7950\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:01:13 crc kubenswrapper[4727]: I1210 15:01:13.129695 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b933c0-cf5d-491a-99f2-901be53e7950-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d5b933c0-cf5d-491a-99f2-901be53e7950\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:01:13 crc kubenswrapper[4727]: I1210 15:01:13.230885 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bvrr\" (UniqueName: \"kubernetes.io/projected/d5b933c0-cf5d-491a-99f2-901be53e7950-kube-api-access-8bvrr\") pod \"nova-cell1-conductor-0\" (UID: \"d5b933c0-cf5d-491a-99f2-901be53e7950\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:01:13 crc kubenswrapper[4727]: I1210 15:01:13.231071 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b933c0-cf5d-491a-99f2-901be53e7950-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d5b933c0-cf5d-491a-99f2-901be53e7950\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:01:13 crc kubenswrapper[4727]: I1210 15:01:13.231145 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b933c0-cf5d-491a-99f2-901be53e7950-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d5b933c0-cf5d-491a-99f2-901be53e7950\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:01:13 crc kubenswrapper[4727]: I1210 15:01:13.237405 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b933c0-cf5d-491a-99f2-901be53e7950-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d5b933c0-cf5d-491a-99f2-901be53e7950\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:01:13 crc kubenswrapper[4727]: I1210 15:01:13.240214 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b933c0-cf5d-491a-99f2-901be53e7950-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d5b933c0-cf5d-491a-99f2-901be53e7950\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:01:13 crc kubenswrapper[4727]: I1210 15:01:13.253725 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bvrr\" (UniqueName: \"kubernetes.io/projected/d5b933c0-cf5d-491a-99f2-901be53e7950-kube-api-access-8bvrr\") pod \"nova-cell1-conductor-0\" (UID: \"d5b933c0-cf5d-491a-99f2-901be53e7950\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:01:13 crc kubenswrapper[4727]: I1210 15:01:13.354091 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 10 15:01:13 crc kubenswrapper[4727]: W1210 15:01:13.833280 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5b933c0_cf5d_491a_99f2_901be53e7950.slice/crio-66489fb0f504822902a02988cb72c5dde45165ba996011908e4d30a05b04d2e8 WatchSource:0}: Error finding container 66489fb0f504822902a02988cb72c5dde45165ba996011908e4d30a05b04d2e8: Status 404 returned error can't find the container with id 66489fb0f504822902a02988cb72c5dde45165ba996011908e4d30a05b04d2e8 Dec 10 15:01:13 crc kubenswrapper[4727]: I1210 15:01:13.835822 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 10 15:01:14 crc kubenswrapper[4727]: I1210 15:01:14.237892 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 15:01:14 crc kubenswrapper[4727]: I1210 15:01:14.239274 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 15:01:14 crc kubenswrapper[4727]: I1210 15:01:14.242844 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 15:01:14 crc kubenswrapper[4727]: I1210 15:01:14.246126 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 15:01:14 crc kubenswrapper[4727]: I1210 15:01:14.777563 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d5b933c0-cf5d-491a-99f2-901be53e7950","Type":"ContainerStarted","Data":"66489fb0f504822902a02988cb72c5dde45165ba996011908e4d30a05b04d2e8"} Dec 10 15:01:14 crc kubenswrapper[4727]: I1210 15:01:14.778086 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 15:01:14 crc kubenswrapper[4727]: I1210 15:01:14.789250 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 15:01:15 crc kubenswrapper[4727]: I1210 15:01:15.789974 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d5b933c0-cf5d-491a-99f2-901be53e7950","Type":"ContainerStarted","Data":"3ac867c47cc01dc1944dae62f67ced83682d8bf066be61881ebf7672a1624a50"} Dec 10 15:01:15 crc kubenswrapper[4727]: I1210 15:01:15.810964 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.810941408 podStartE2EDuration="3.810941408s" podCreationTimestamp="2025-12-10 15:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:01:15.805148701 +0000 UTC m=+1779.999923233" watchObservedRunningTime="2025-12-10 15:01:15.810941408 +0000 UTC m=+1780.005715950" Dec 10 15:01:16 crc kubenswrapper[4727]: I1210 15:01:16.806764 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 10 15:01:17 crc 
kubenswrapper[4727]: I1210 15:01:17.563950 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:01:17 crc kubenswrapper[4727]: E1210 15:01:17.564295 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:01:22 crc kubenswrapper[4727]: I1210 15:01:22.369460 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="21895a9a-4f96-4f38-b042-8fe46be851d3" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:01:22 crc kubenswrapper[4727]: I1210 15:01:22.369531 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="21895a9a-4f96-4f38-b042-8fe46be851d3" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:01:23 crc kubenswrapper[4727]: I1210 15:01:23.387165 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 10 15:01:24 crc kubenswrapper[4727]: I1210 15:01:24.934535 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-sv8qp"] Dec 10 15:01:24 crc kubenswrapper[4727]: I1210 15:01:24.936417 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:24 crc kubenswrapper[4727]: I1210 15:01:24.939974 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 10 15:01:24 crc kubenswrapper[4727]: I1210 15:01:24.940107 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 10 15:01:24 crc kubenswrapper[4727]: I1210 15:01:24.956736 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sv8qp"] Dec 10 15:01:25 crc kubenswrapper[4727]: I1210 15:01:25.044217 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sv8qp\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:25 crc kubenswrapper[4727]: I1210 15:01:25.044386 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nnp6\" (UniqueName: \"kubernetes.io/projected/b60ac517-553a-4ca2-a7f5-b2617ce048f6-kube-api-access-8nnp6\") pod \"nova-cell1-cell-mapping-sv8qp\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:25 crc kubenswrapper[4727]: I1210 15:01:25.044432 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-scripts\") pod \"nova-cell1-cell-mapping-sv8qp\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:25 crc kubenswrapper[4727]: I1210 15:01:25.044488 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-config-data\") pod \"nova-cell1-cell-mapping-sv8qp\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:25 crc kubenswrapper[4727]: I1210 15:01:25.146155 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sv8qp\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:25 crc kubenswrapper[4727]: I1210 15:01:25.146283 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nnp6\" (UniqueName: \"kubernetes.io/projected/b60ac517-553a-4ca2-a7f5-b2617ce048f6-kube-api-access-8nnp6\") pod \"nova-cell1-cell-mapping-sv8qp\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:25 crc kubenswrapper[4727]: I1210 15:01:25.146313 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-scripts\") pod \"nova-cell1-cell-mapping-sv8qp\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:25 crc kubenswrapper[4727]: I1210 15:01:25.146351 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-config-data\") pod \"nova-cell1-cell-mapping-sv8qp\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:25 crc kubenswrapper[4727]: I1210 15:01:25.153972 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sv8qp\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:25 crc kubenswrapper[4727]: I1210 15:01:25.154161 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-scripts\") pod \"nova-cell1-cell-mapping-sv8qp\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:25 crc kubenswrapper[4727]: I1210 15:01:25.156262 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-config-data\") pod \"nova-cell1-cell-mapping-sv8qp\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:25 crc kubenswrapper[4727]: I1210 15:01:25.176092 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nnp6\" (UniqueName: \"kubernetes.io/projected/b60ac517-553a-4ca2-a7f5-b2617ce048f6-kube-api-access-8nnp6\") pod \"nova-cell1-cell-mapping-sv8qp\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:25 crc kubenswrapper[4727]: I1210 15:01:25.265433 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:25 crc kubenswrapper[4727]: I1210 15:01:25.789937 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sv8qp"] Dec 10 15:01:25 crc kubenswrapper[4727]: W1210 15:01:25.790727 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb60ac517_553a_4ca2_a7f5_b2617ce048f6.slice/crio-066de5eb7735057d79e3dd85e61ce10aba909ff3cc5e4f27d0837cb4385edda6 WatchSource:0}: Error finding container 066de5eb7735057d79e3dd85e61ce10aba909ff3cc5e4f27d0837cb4385edda6: Status 404 returned error can't find the container with id 066de5eb7735057d79e3dd85e61ce10aba909ff3cc5e4f27d0837cb4385edda6 Dec 10 15:01:26 crc kubenswrapper[4727]: I1210 15:01:26.462751 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sv8qp" event={"ID":"b60ac517-553a-4ca2-a7f5-b2617ce048f6","Type":"ContainerStarted","Data":"fcd47e44c0fde09bdb321662c78e09baa0175b7fd0774e4ffc81adce4f34fe98"} Dec 10 15:01:26 crc kubenswrapper[4727]: I1210 15:01:26.463180 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sv8qp" event={"ID":"b60ac517-553a-4ca2-a7f5-b2617ce048f6","Type":"ContainerStarted","Data":"066de5eb7735057d79e3dd85e61ce10aba909ff3cc5e4f27d0837cb4385edda6"} Dec 10 15:01:26 crc kubenswrapper[4727]: I1210 15:01:26.489524 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-sv8qp" podStartSLOduration=2.489501562 podStartE2EDuration="2.489501562s" podCreationTimestamp="2025-12-10 15:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:01:26.485123261 +0000 UTC m=+1790.679897813" watchObservedRunningTime="2025-12-10 15:01:26.489501562 +0000 UTC m=+1790.684276094" Dec 10 15:01:29 crc kubenswrapper[4727]: I1210 15:01:29.563695 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:01:29 crc kubenswrapper[4727]: E1210 15:01:29.565854 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:01:29 crc kubenswrapper[4727]: I1210 15:01:29.572884 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 10 15:01:32 crc kubenswrapper[4727]: E1210 15:01:32.416660 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb60ac517_553a_4ca2_a7f5_b2617ce048f6.slice/crio-fcd47e44c0fde09bdb321662c78e09baa0175b7fd0774e4ffc81adce4f34fe98.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb60ac517_553a_4ca2_a7f5_b2617ce048f6.slice/crio-conmon-fcd47e44c0fde09bdb321662c78e09baa0175b7fd0774e4ffc81adce4f34fe98.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:01:32 crc kubenswrapper[4727]: I1210 15:01:32.588034 4727 
generic.go:334] "Generic (PLEG): container finished" podID="b60ac517-553a-4ca2-a7f5-b2617ce048f6" containerID="fcd47e44c0fde09bdb321662c78e09baa0175b7fd0774e4ffc81adce4f34fe98" exitCode=0 Dec 10 15:01:32 crc kubenswrapper[4727]: I1210 15:01:32.588107 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sv8qp" event={"ID":"b60ac517-553a-4ca2-a7f5-b2617ce048f6","Type":"ContainerDied","Data":"fcd47e44c0fde09bdb321662c78e09baa0175b7fd0774e4ffc81adce4f34fe98"} Dec 10 15:01:33 crc kubenswrapper[4727]: I1210 15:01:33.989209 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:01:33 crc kubenswrapper[4727]: I1210 15:01:33.989776 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="5d7bed75-52e4-4b97-830b-f2b55f222732" containerName="kube-state-metrics" containerID="cri-o://1dbf2f960a48083059cf0dfcfad95fb6c576364fefd4fdae813dfd926a2e9caa" gracePeriod=30 Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.228116 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.358140 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-scripts\") pod \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.358246 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nnp6\" (UniqueName: \"kubernetes.io/projected/b60ac517-553a-4ca2-a7f5-b2617ce048f6-kube-api-access-8nnp6\") pod \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.358345 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-config-data\") pod \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.358450 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-combined-ca-bundle\") pod \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\" (UID: \"b60ac517-553a-4ca2-a7f5-b2617ce048f6\") " Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.367190 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-scripts" (OuterVolumeSpecName: "scripts") pod "b60ac517-553a-4ca2-a7f5-b2617ce048f6" (UID: "b60ac517-553a-4ca2-a7f5-b2617ce048f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.369003 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b60ac517-553a-4ca2-a7f5-b2617ce048f6-kube-api-access-8nnp6" (OuterVolumeSpecName: "kube-api-access-8nnp6") pod "b60ac517-553a-4ca2-a7f5-b2617ce048f6" (UID: "b60ac517-553a-4ca2-a7f5-b2617ce048f6"). InnerVolumeSpecName "kube-api-access-8nnp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.414729 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b60ac517-553a-4ca2-a7f5-b2617ce048f6" (UID: "b60ac517-553a-4ca2-a7f5-b2617ce048f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.415097 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-config-data" (OuterVolumeSpecName: "config-data") pod "b60ac517-553a-4ca2-a7f5-b2617ce048f6" (UID: "b60ac517-553a-4ca2-a7f5-b2617ce048f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.461533 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.461582 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nnp6\" (UniqueName: \"kubernetes.io/projected/b60ac517-553a-4ca2-a7f5-b2617ce048f6-kube-api-access-8nnp6\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.461596 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.461610 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60ac517-553a-4ca2-a7f5-b2617ce048f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.511259 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.614334 4727 generic.go:334] "Generic (PLEG): container finished" podID="5d7bed75-52e4-4b97-830b-f2b55f222732" containerID="1dbf2f960a48083059cf0dfcfad95fb6c576364fefd4fdae813dfd926a2e9caa" exitCode=2 Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.615222 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5d7bed75-52e4-4b97-830b-f2b55f222732","Type":"ContainerDied","Data":"1dbf2f960a48083059cf0dfcfad95fb6c576364fefd4fdae813dfd926a2e9caa"} Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.615311 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.615473 4727 scope.go:117] "RemoveContainer" containerID="1dbf2f960a48083059cf0dfcfad95fb6c576364fefd4fdae813dfd926a2e9caa" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.615392 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5d7bed75-52e4-4b97-830b-f2b55f222732","Type":"ContainerDied","Data":"edf1a40d1d26378d7a48bfbf46741645d8fc94474c27c8ba827542f545d027cc"} Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.622369 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sv8qp" event={"ID":"b60ac517-553a-4ca2-a7f5-b2617ce048f6","Type":"ContainerDied","Data":"066de5eb7735057d79e3dd85e61ce10aba909ff3cc5e4f27d0837cb4385edda6"} Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.622418 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="066de5eb7735057d79e3dd85e61ce10aba909ff3cc5e4f27d0837cb4385edda6" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.622497 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sv8qp" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.931298 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msms2\" (UniqueName: \"kubernetes.io/projected/5d7bed75-52e4-4b97-830b-f2b55f222732-kube-api-access-msms2\") pod \"5d7bed75-52e4-4b97-830b-f2b55f222732\" (UID: \"5d7bed75-52e4-4b97-830b-f2b55f222732\") " Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.944986 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d7bed75-52e4-4b97-830b-f2b55f222732-kube-api-access-msms2" (OuterVolumeSpecName: "kube-api-access-msms2") pod "5d7bed75-52e4-4b97-830b-f2b55f222732" (UID: "5d7bed75-52e4-4b97-830b-f2b55f222732"). InnerVolumeSpecName "kube-api-access-msms2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.968263 4727 scope.go:117] "RemoveContainer" containerID="1dbf2f960a48083059cf0dfcfad95fb6c576364fefd4fdae813dfd926a2e9caa" Dec 10 15:01:34 crc kubenswrapper[4727]: E1210 15:01:34.981892 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dbf2f960a48083059cf0dfcfad95fb6c576364fefd4fdae813dfd926a2e9caa\": container with ID starting with 1dbf2f960a48083059cf0dfcfad95fb6c576364fefd4fdae813dfd926a2e9caa not found: ID does not exist" containerID="1dbf2f960a48083059cf0dfcfad95fb6c576364fefd4fdae813dfd926a2e9caa" Dec 10 15:01:34 crc kubenswrapper[4727]: I1210 15:01:34.981970 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dbf2f960a48083059cf0dfcfad95fb6c576364fefd4fdae813dfd926a2e9caa"} err="failed to get container status \"1dbf2f960a48083059cf0dfcfad95fb6c576364fefd4fdae813dfd926a2e9caa\": rpc error: code = NotFound desc = could not find container \"1dbf2f960a48083059cf0dfcfad95fb6c576364fefd4fdae813dfd926a2e9caa\": container with ID starting with 1dbf2f960a48083059cf0dfcfad95fb6c576364fefd4fdae813dfd926a2e9caa not found: ID does not exist" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.042185 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msms2\" (UniqueName: \"kubernetes.io/projected/5d7bed75-52e4-4b97-830b-f2b55f222732-kube-api-access-msms2\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.134415 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.134762 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e4076258-0b0c-4084-ad95-a75be0238578" containerName="nova-api-log" containerID="cri-o://aea51930c49240ad76b6dff47b4bbc871989fec9e59473f28ee09246ba762f94" gracePeriod=30 Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.134952 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e4076258-0b0c-4084-ad95-a75be0238578" containerName="nova-api-api" containerID="cri-o://9c7ef0c8d771309efa2212fafd523c9c7966410b28842f7fcd267534ea4c6b2a" gracePeriod=30 Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.187382 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.188024 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" containerName="nova-metadata-log" containerID="cri-o://cd87fe242f8051ea4656d6681f0e0ca849089acbbe2003c5ca2494ffa8c68679" gracePeriod=30 Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.188218 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" containerName="nova-metadata-metadata" containerID="cri-o://c0f245fd005e58c95d1f653f4388c2c65025cbeffb46e8abd8107e40428bfc2f" gracePeriod=30 Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.202185 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.202484 4727 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-scheduler-0" podUID="66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d" containerName="nova-scheduler-scheduler" containerID="cri-o://f424e0aa0d6beee5332afa73605ab85efe020142d2017aa8876f52fc32ed1d3b" gracePeriod=30 Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.410198 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.448038 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.469246 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:01:35 crc kubenswrapper[4727]: E1210 15:01:35.469910 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b60ac517-553a-4ca2-a7f5-b2617ce048f6" containerName="nova-manage" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.469959 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60ac517-553a-4ca2-a7f5-b2617ce048f6" containerName="nova-manage" Dec 10 15:01:35 crc kubenswrapper[4727]: E1210 15:01:35.470018 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d7bed75-52e4-4b97-830b-f2b55f222732" containerName="kube-state-metrics" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.470029 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d7bed75-52e4-4b97-830b-f2b55f222732" containerName="kube-state-metrics" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.470320 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d7bed75-52e4-4b97-830b-f2b55f222732" containerName="kube-state-metrics" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.470346 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b60ac517-553a-4ca2-a7f5-b2617ce048f6" containerName="nova-manage" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.471420 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.477495 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.478172 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.515427 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.557520 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4pdq\" (UniqueName: \"kubernetes.io/projected/84ba56c7-2390-4e8f-a47b-690a94da6c20-kube-api-access-b4pdq\") pod \"kube-state-metrics-0\" (UID: \"84ba56c7-2390-4e8f-a47b-690a94da6c20\") " pod="openstack/kube-state-metrics-0" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.557636 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/84ba56c7-2390-4e8f-a47b-690a94da6c20-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"84ba56c7-2390-4e8f-a47b-690a94da6c20\") " pod="openstack/kube-state-metrics-0" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.557727 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84ba56c7-2390-4e8f-a47b-690a94da6c20-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"84ba56c7-2390-4e8f-a47b-690a94da6c20\") " pod="openstack/kube-state-metrics-0" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.557793 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/84ba56c7-2390-4e8f-a47b-690a94da6c20-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"84ba56c7-2390-4e8f-a47b-690a94da6c20\") " pod="openstack/kube-state-metrics-0" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.661357 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84ba56c7-2390-4e8f-a47b-690a94da6c20-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"84ba56c7-2390-4e8f-a47b-690a94da6c20\") " pod="openstack/kube-state-metrics-0" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.661456 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/84ba56c7-2390-4e8f-a47b-690a94da6c20-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"84ba56c7-2390-4e8f-a47b-690a94da6c20\") " pod="openstack/kube-state-metrics-0" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.661578 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4pdq\" (UniqueName: \"kubernetes.io/projected/84ba56c7-2390-4e8f-a47b-690a94da6c20-kube-api-access-b4pdq\") pod \"kube-state-metrics-0\" (UID: \"84ba56c7-2390-4e8f-a47b-690a94da6c20\") " pod="openstack/kube-state-metrics-0" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.661647 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/84ba56c7-2390-4e8f-a47b-690a94da6c20-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"84ba56c7-2390-4e8f-a47b-690a94da6c20\") " pod="openstack/kube-state-metrics-0" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.669242 4727 generic.go:334] "Generic (PLEG): container finished" podID="6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" containerID="cd87fe242f8051ea4656d6681f0e0ca849089acbbe2003c5ca2494ffa8c68679" exitCode=143 Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.669356 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7","Type":"ContainerDied","Data":"cd87fe242f8051ea4656d6681f0e0ca849089acbbe2003c5ca2494ffa8c68679"} Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.669949 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/84ba56c7-2390-4e8f-a47b-690a94da6c20-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"84ba56c7-2390-4e8f-a47b-690a94da6c20\") " pod="openstack/kube-state-metrics-0" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.669978 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/84ba56c7-2390-4e8f-a47b-690a94da6c20-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"84ba56c7-2390-4e8f-a47b-690a94da6c20\") " pod="openstack/kube-state-metrics-0" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.680803 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84ba56c7-2390-4e8f-a47b-690a94da6c20-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"84ba56c7-2390-4e8f-a47b-690a94da6c20\") " pod="openstack/kube-state-metrics-0" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.696882 4727 generic.go:334] "Generic (PLEG): container finished" podID="e4076258-0b0c-4084-ad95-a75be0238578" containerID="aea51930c49240ad76b6dff47b4bbc871989fec9e59473f28ee09246ba762f94" exitCode=143 Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.697012 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4076258-0b0c-4084-ad95-a75be0238578","Type":"ContainerDied","Data":"aea51930c49240ad76b6dff47b4bbc871989fec9e59473f28ee09246ba762f94"} Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.710622 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4pdq\" (UniqueName: \"kubernetes.io/projected/84ba56c7-2390-4e8f-a47b-690a94da6c20-kube-api-access-b4pdq\") pod \"kube-state-metrics-0\" (UID: \"84ba56c7-2390-4e8f-a47b-690a94da6c20\") " pod="openstack/kube-state-metrics-0" Dec 10 15:01:35 crc kubenswrapper[4727]: I1210 15:01:35.804338 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 15:01:36 crc kubenswrapper[4727]: I1210 15:01:36.396296 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:01:36 crc kubenswrapper[4727]: I1210 15:01:36.586050 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d7bed75-52e4-4b97-830b-f2b55f222732" path="/var/lib/kubelet/pods/5d7bed75-52e4-4b97-830b-f2b55f222732/volumes" Dec 10 15:01:36 crc kubenswrapper[4727]: I1210 15:01:36.765276 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"84ba56c7-2390-4e8f-a47b-690a94da6c20","Type":"ContainerStarted","Data":"ccf9ac426d958ffeea5847517b6eb36b0e324b49a2836131902a953910c64394"} Dec 10 15:01:37 crc kubenswrapper[4727]: I1210 15:01:37.765657 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:01:37 crc kubenswrapper[4727]: I1210 15:01:37.766333 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="ceilometer-central-agent" containerID="cri-o://fd3850753860bffa8f1f01595dd4e45f9223c5a2265cfc3dc3d355829f67b3fd" gracePeriod=30 Dec 10 15:01:37 crc kubenswrapper[4727]: I1210 15:01:37.766477 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="proxy-httpd" containerID="cri-o://fb6e5a61ff936a6d199bbfcb2b378dc57b41deb874c425f35ac7d9b763fdf02f" gracePeriod=30 Dec 10 15:01:37 crc kubenswrapper[4727]: I1210 15:01:37.766533 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="sg-core" containerID="cri-o://ce34707cd5d751a596804fd58bd973206cc4477ef40f33cc3ef1b748fe5fb58b" gracePeriod=30 Dec 10 15:01:37 crc kubenswrapper[4727]: I1210 15:01:37.766577 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="ceilometer-notification-agent" containerID="cri-o://932455ff6e476d141d22f9cb369e7ac3975796b19874014c52da8e1222504607" gracePeriod=30 Dec 10 15:01:37 crc kubenswrapper[4727]: I1210 15:01:37.798828 4727 generic.go:334] "Generic (PLEG): container finished" podID="66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d" containerID="f424e0aa0d6beee5332afa73605ab85efe020142d2017aa8876f52fc32ed1d3b" exitCode=0 Dec 10 15:01:37 crc kubenswrapper[4727]: I1210 15:01:37.799067 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d","Type":"ContainerDied","Data":"f424e0aa0d6beee5332afa73605ab85efe020142d2017aa8876f52fc32ed1d3b"} Dec 10 15:01:37 crc kubenswrapper[4727]: I1210 15:01:37.801491 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"84ba56c7-2390-4e8f-a47b-690a94da6c20","Type":"ContainerStarted","Data":"92b6657b677beb5cd9e36db4a29bd6975ca1a7d31d2a1b95232dc6f160150a76"} Dec 10 15:01:37 crc kubenswrapper[4727]: I1210 15:01:37.801639 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 10 15:01:37 crc kubenswrapper[4727]: I1210 15:01:37.830179 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" 
podStartSLOduration=2.431974142 podStartE2EDuration="2.830158835s" podCreationTimestamp="2025-12-10 15:01:35 +0000 UTC" firstStartedPulling="2025-12-10 15:01:36.404689965 +0000 UTC m=+1800.599464507" lastFinishedPulling="2025-12-10 15:01:36.802874658 +0000 UTC m=+1800.997649200" observedRunningTime="2025-12-10 15:01:37.82637504 +0000 UTC m=+1802.021149582" watchObservedRunningTime="2025-12-10 15:01:37.830158835 +0000 UTC m=+1802.024933387" Dec 10 15:01:37 crc kubenswrapper[4727]: E1210 15:01:37.998992 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f424e0aa0d6beee5332afa73605ab85efe020142d2017aa8876f52fc32ed1d3b is running failed: container process not found" containerID="f424e0aa0d6beee5332afa73605ab85efe020142d2017aa8876f52fc32ed1d3b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:01:37 crc kubenswrapper[4727]: E1210 15:01:37.999384 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f424e0aa0d6beee5332afa73605ab85efe020142d2017aa8876f52fc32ed1d3b is running failed: container process not found" containerID="f424e0aa0d6beee5332afa73605ab85efe020142d2017aa8876f52fc32ed1d3b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:01:38 crc kubenswrapper[4727]: E1210 15:01:38.006705 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f424e0aa0d6beee5332afa73605ab85efe020142d2017aa8876f52fc32ed1d3b is running failed: container process not found" containerID="f424e0aa0d6beee5332afa73605ab85efe020142d2017aa8876f52fc32ed1d3b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:01:38 crc kubenswrapper[4727]: E1210 15:01:38.006787 4727 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f424e0aa0d6beee5332afa73605ab85efe020142d2017aa8876f52fc32ed1d3b is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d" containerName="nova-scheduler-scheduler" Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.036855 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.220395 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-combined-ca-bundle\") pod \"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d\" (UID: \"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d\") " Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.220589 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7cln\" (UniqueName: \"kubernetes.io/projected/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-kube-api-access-r7cln\") pod \"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d\" (UID: \"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d\") " Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.220618 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-config-data\") pod \"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d\" (UID: \"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d\") " Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.225704 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-kube-api-access-r7cln" (OuterVolumeSpecName: "kube-api-access-r7cln") pod "66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d" (UID: "66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d"). InnerVolumeSpecName "kube-api-access-r7cln". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.251470 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-config-data" (OuterVolumeSpecName: "config-data") pod "66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d" (UID: "66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.261530 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d" (UID: "66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.324091 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.324127 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7cln\" (UniqueName: \"kubernetes.io/projected/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-kube-api-access-r7cln\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.324154 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.851350 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d","Type":"ContainerDied","Data":"3402f6d60114e046f5fab7af27ce797c16fc6800f9c537733e8af288fe2ef829"} Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.851672 4727 scope.go:117] "RemoveContainer" containerID="f424e0aa0d6beee5332afa73605ab85efe020142d2017aa8876f52fc32ed1d3b" Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.851830 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.871551 4727 generic.go:334] "Generic (PLEG): container finished" podID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerID="fb6e5a61ff936a6d199bbfcb2b378dc57b41deb874c425f35ac7d9b763fdf02f" exitCode=0 Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.871582 4727 generic.go:334] "Generic (PLEG): container finished" podID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerID="ce34707cd5d751a596804fd58bd973206cc4477ef40f33cc3ef1b748fe5fb58b" exitCode=2 Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.871589 4727 generic.go:334] "Generic (PLEG): container finished" podID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerID="fd3850753860bffa8f1f01595dd4e45f9223c5a2265cfc3dc3d355829f67b3fd" exitCode=0 Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.871671 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be66e59e-7621-4438-8c4c-5d5791a3f3f8","Type":"ContainerDied","Data":"fb6e5a61ff936a6d199bbfcb2b378dc57b41deb874c425f35ac7d9b763fdf02f"} Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.871699 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be66e59e-7621-4438-8c4c-5d5791a3f3f8","Type":"ContainerDied","Data":"ce34707cd5d751a596804fd58bd973206cc4477ef40f33cc3ef1b748fe5fb58b"} Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.871710 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be66e59e-7621-4438-8c4c-5d5791a3f3f8","Type":"ContainerDied","Data":"fd3850753860bffa8f1f01595dd4e45f9223c5a2265cfc3dc3d355829f67b3fd"} Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.891795 4727 generic.go:334] "Generic (PLEG): container finished" podID="6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" containerID="c0f245fd005e58c95d1f653f4388c2c65025cbeffb46e8abd8107e40428bfc2f" exitCode=0 Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.891914 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7","Type":"ContainerDied","Data":"c0f245fd005e58c95d1f653f4388c2c65025cbeffb46e8abd8107e40428bfc2f"} Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.915383 4727 generic.go:334] "Generic (PLEG): container finished" podID="e4076258-0b0c-4084-ad95-a75be0238578" containerID="9c7ef0c8d771309efa2212fafd523c9c7966410b28842f7fcd267534ea4c6b2a" exitCode=0 Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.915481 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4076258-0b0c-4084-ad95-a75be0238578","Type":"ContainerDied","Data":"9c7ef0c8d771309efa2212fafd523c9c7966410b28842f7fcd267534ea4c6b2a"} Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.917947 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.946318 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.966078 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:01:38 crc kubenswrapper[4727]: E1210 15:01:38.967180 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d" containerName="nova-scheduler-scheduler" Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.967203 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d" containerName="nova-scheduler-scheduler" Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.967596 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d" containerName="nova-scheduler-scheduler" Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.968442 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.974370 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 10 15:01:38 crc kubenswrapper[4727]: I1210 15:01:38.978378 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.042876 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f496d39-b701-4789-aaff-d7bf1602a945-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f496d39-b701-4789-aaff-d7bf1602a945\") " pod="openstack/nova-scheduler-0" Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.042967 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srmnt\" (UniqueName: \"kubernetes.io/projected/4f496d39-b701-4789-aaff-d7bf1602a945-kube-api-access-srmnt\") pod \"nova-scheduler-0\" (UID: \"4f496d39-b701-4789-aaff-d7bf1602a945\") " pod="openstack/nova-scheduler-0" Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.043136 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f496d39-b701-4789-aaff-d7bf1602a945-config-data\") pod \"nova-scheduler-0\" (UID: \"4f496d39-b701-4789-aaff-d7bf1602a945\") " pod="openstack/nova-scheduler-0" Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.124765 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.144628 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4076258-0b0c-4084-ad95-a75be0238578-logs\") pod \"e4076258-0b0c-4084-ad95-a75be0238578\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.146538 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-public-tls-certs\") pod \"e4076258-0b0c-4084-ad95-a75be0238578\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.146573 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-combined-ca-bundle\") pod \"e4076258-0b0c-4084-ad95-a75be0238578\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.146622 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-internal-tls-certs\") pod \"e4076258-0b0c-4084-ad95-a75be0238578\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.146787 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8t68\" (UniqueName: \"kubernetes.io/projected/e4076258-0b0c-4084-ad95-a75be0238578-kube-api-access-s8t68\") pod \"e4076258-0b0c-4084-ad95-a75be0238578\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") " Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.147056 
4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-config-data\") pod \"e4076258-0b0c-4084-ad95-a75be0238578\" (UID: \"e4076258-0b0c-4084-ad95-a75be0238578\") "
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.147548 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srmnt\" (UniqueName: \"kubernetes.io/projected/4f496d39-b701-4789-aaff-d7bf1602a945-kube-api-access-srmnt\") pod \"nova-scheduler-0\" (UID: \"4f496d39-b701-4789-aaff-d7bf1602a945\") " pod="openstack/nova-scheduler-0"
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.148077 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f496d39-b701-4789-aaff-d7bf1602a945-config-data\") pod \"nova-scheduler-0\" (UID: \"4f496d39-b701-4789-aaff-d7bf1602a945\") " pod="openstack/nova-scheduler-0"
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.148241 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f496d39-b701-4789-aaff-d7bf1602a945-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f496d39-b701-4789-aaff-d7bf1602a945\") " pod="openstack/nova-scheduler-0"
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.149457 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4076258-0b0c-4084-ad95-a75be0238578-logs" (OuterVolumeSpecName: "logs") pod "e4076258-0b0c-4084-ad95-a75be0238578" (UID: "e4076258-0b0c-4084-ad95-a75be0238578"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.168294 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f496d39-b701-4789-aaff-d7bf1602a945-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f496d39-b701-4789-aaff-d7bf1602a945\") " pod="openstack/nova-scheduler-0"
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.171995 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4076258-0b0c-4084-ad95-a75be0238578-kube-api-access-s8t68" (OuterVolumeSpecName: "kube-api-access-s8t68") pod "e4076258-0b0c-4084-ad95-a75be0238578" (UID: "e4076258-0b0c-4084-ad95-a75be0238578"). InnerVolumeSpecName "kube-api-access-s8t68". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.178431 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srmnt\" (UniqueName: \"kubernetes.io/projected/4f496d39-b701-4789-aaff-d7bf1602a945-kube-api-access-srmnt\") pod \"nova-scheduler-0\" (UID: \"4f496d39-b701-4789-aaff-d7bf1602a945\") " pod="openstack/nova-scheduler-0"
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.186785 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f496d39-b701-4789-aaff-d7bf1602a945-config-data\") pod \"nova-scheduler-0\" (UID: \"4f496d39-b701-4789-aaff-d7bf1602a945\") " pod="openstack/nova-scheduler-0"
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.225508 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-config-data" (OuterVolumeSpecName: "config-data") pod "e4076258-0b0c-4084-ad95-a75be0238578" (UID: "e4076258-0b0c-4084-ad95-a75be0238578"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.256044 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.256658 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4076258-0b0c-4084-ad95-a75be0238578-logs\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.256776 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8t68\" (UniqueName: \"kubernetes.io/projected/e4076258-0b0c-4084-ad95-a75be0238578-kube-api-access-s8t68\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.276135 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e4076258-0b0c-4084-ad95-a75be0238578" (UID: "e4076258-0b0c-4084-ad95-a75be0238578"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.298037 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.308635 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4076258-0b0c-4084-ad95-a75be0238578" (UID: "e4076258-0b0c-4084-ad95-a75be0238578"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.359021 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.359062 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.376338 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e4076258-0b0c-4084-ad95-a75be0238578" (UID: "e4076258-0b0c-4084-ad95-a75be0238578"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.631714 4727 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4076258-0b0c-4084-ad95-a75be0238578-public-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.692160 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.733009 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-config-data\") pod \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") "
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.733441 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-logs\") pod \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") "
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.733516 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmcpf\" (UniqueName: \"kubernetes.io/projected/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-kube-api-access-rmcpf\") pod \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") "
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.733633 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-nova-metadata-tls-certs\") pod \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") "
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.733655 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-combined-ca-bundle\") pod \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\" (UID: \"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7\") "
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.738714 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-logs" (OuterVolumeSpecName: "logs") pod "6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" (UID: "6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.744737 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-kube-api-access-rmcpf" (OuterVolumeSpecName: "kube-api-access-rmcpf") pod "6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" (UID: "6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7"). InnerVolumeSpecName "kube-api-access-rmcpf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.776462 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" (UID: "6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.793546 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-config-data" (OuterVolumeSpecName: "config-data") pod "6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" (UID: "6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.813358 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" (UID: "6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.837247 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-logs\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.837296 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmcpf\" (UniqueName: \"kubernetes.io/projected/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-kube-api-access-rmcpf\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.837309 4727 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.837322 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:39 crc kubenswrapper[4727]: I1210 15:01:39.837334 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:39.929759 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4076258-0b0c-4084-ad95-a75be0238578","Type":"ContainerDied","Data":"8340c7c5411c949d41495472854f537de9f73a3f1f15cfd58887998d5a5411af"}
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:39.929808 4727 scope.go:117] "RemoveContainer" containerID="9c7ef0c8d771309efa2212fafd523c9c7966410b28842f7fcd267534ea4c6b2a"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:39.930040 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:39.943694 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7","Type":"ContainerDied","Data":"4e1525611ca8b7e52e3e966f3e68d8f00b64a2dd389241853413b64ac87bc5c6"}
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:39.943803 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: W1210 15:01:39.953334 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f496d39_b701_4789_aaff_d7bf1602a945.slice/crio-cffa87c64cf757da7f732c2905c2a59d374ada47ca64be3386b4ce71972ea736 WatchSource:0}: Error finding container cffa87c64cf757da7f732c2905c2a59d374ada47ca64be3386b4ce71972ea736: Status 404 returned error can't find the container with id cffa87c64cf757da7f732c2905c2a59d374ada47ca64be3386b4ce71972ea736
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:39.955013 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:39.998832 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.022825 4727 scope.go:117] "RemoveContainer" containerID="aea51930c49240ad76b6dff47b4bbc871989fec9e59473f28ee09246ba762f94"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.023004 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.038197 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.052165 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.075679 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 10 15:01:40 crc kubenswrapper[4727]: E1210 15:01:40.077454 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4076258-0b0c-4084-ad95-a75be0238578" containerName="nova-api-log"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.077481 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4076258-0b0c-4084-ad95-a75be0238578" containerName="nova-api-log"
Dec 10 15:01:40 crc kubenswrapper[4727]: E1210 15:01:40.077510 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" containerName="nova-metadata-metadata"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.077519 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" containerName="nova-metadata-metadata"
Dec 10 15:01:40 crc kubenswrapper[4727]: E1210 15:01:40.077530 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4076258-0b0c-4084-ad95-a75be0238578" containerName="nova-api-api"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.077538 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4076258-0b0c-4084-ad95-a75be0238578" containerName="nova-api-api"
Dec 10 15:01:40 crc kubenswrapper[4727]: E1210 15:01:40.077549 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" containerName="nova-metadata-log"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.077559 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" containerName="nova-metadata-log"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.077883 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4076258-0b0c-4084-ad95-a75be0238578" containerName="nova-api-api"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.077953 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" containerName="nova-metadata-metadata"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.077969 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" containerName="nova-metadata-log"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.077992 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4076258-0b0c-4084-ad95-a75be0238578" containerName="nova-api-log"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.080920 4727 scope.go:117] "RemoveContainer" containerID="c0f245fd005e58c95d1f653f4388c2c65025cbeffb46e8abd8107e40428bfc2f"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.082203 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.086739 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.086955 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.087417 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.088203 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.098066 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.103651 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.106151 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.106464 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.108654 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.153477 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q25kc\" (UniqueName: \"kubernetes.io/projected/4a402d8e-da25-43fd-a5c8-bb0588d544ab-kube-api-access-q25kc\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.153869 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86934207-9ef6-488d-8f95-4ab3ad0c5fc7-logs\") pod \"nova-metadata-0\" (UID: \"86934207-9ef6-488d-8f95-4ab3ad0c5fc7\") " pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.153961 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a402d8e-da25-43fd-a5c8-bb0588d544ab-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.154006 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a402d8e-da25-43fd-a5c8-bb0588d544ab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.154027 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/86934207-9ef6-488d-8f95-4ab3ad0c5fc7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"86934207-9ef6-488d-8f95-4ab3ad0c5fc7\") " pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.154066 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a402d8e-da25-43fd-a5c8-bb0588d544ab-public-tls-certs\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.154100 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86934207-9ef6-488d-8f95-4ab3ad0c5fc7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"86934207-9ef6-488d-8f95-4ab3ad0c5fc7\") " pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.154398 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86934207-9ef6-488d-8f95-4ab3ad0c5fc7-config-data\") pod \"nova-metadata-0\" (UID: \"86934207-9ef6-488d-8f95-4ab3ad0c5fc7\") " pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.154473 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d68r6\" (UniqueName: \"kubernetes.io/projected/86934207-9ef6-488d-8f95-4ab3ad0c5fc7-kube-api-access-d68r6\") pod \"nova-metadata-0\" (UID: \"86934207-9ef6-488d-8f95-4ab3ad0c5fc7\") " pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.154670 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a402d8e-da25-43fd-a5c8-bb0588d544ab-logs\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.154780 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a402d8e-da25-43fd-a5c8-bb0588d544ab-config-data\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.170952 4727 scope.go:117] "RemoveContainer" containerID="cd87fe242f8051ea4656d6681f0e0ca849089acbbe2003c5ca2494ffa8c68679"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.257167 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d68r6\" (UniqueName: \"kubernetes.io/projected/86934207-9ef6-488d-8f95-4ab3ad0c5fc7-kube-api-access-d68r6\") pod \"nova-metadata-0\" (UID: \"86934207-9ef6-488d-8f95-4ab3ad0c5fc7\") " pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.257307 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a402d8e-da25-43fd-a5c8-bb0588d544ab-logs\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.257362 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a402d8e-da25-43fd-a5c8-bb0588d544ab-config-data\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.257403 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q25kc\" (UniqueName: \"kubernetes.io/projected/4a402d8e-da25-43fd-a5c8-bb0588d544ab-kube-api-access-q25kc\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.257434 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86934207-9ef6-488d-8f95-4ab3ad0c5fc7-logs\") pod \"nova-metadata-0\" (UID: \"86934207-9ef6-488d-8f95-4ab3ad0c5fc7\") " pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.257992 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a402d8e-da25-43fd-a5c8-bb0588d544ab-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.258041 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86934207-9ef6-488d-8f95-4ab3ad0c5fc7-logs\") pod \"nova-metadata-0\" (UID: \"86934207-9ef6-488d-8f95-4ab3ad0c5fc7\") " pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.258054 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a402d8e-da25-43fd-a5c8-bb0588d544ab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.258136 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/86934207-9ef6-488d-8f95-4ab3ad0c5fc7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"86934207-9ef6-488d-8f95-4ab3ad0c5fc7\") " pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.258241 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a402d8e-da25-43fd-a5c8-bb0588d544ab-public-tls-certs\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.258325 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86934207-9ef6-488d-8f95-4ab3ad0c5fc7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"86934207-9ef6-488d-8f95-4ab3ad0c5fc7\") " pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.258583 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86934207-9ef6-488d-8f95-4ab3ad0c5fc7-config-data\") pod \"nova-metadata-0\" (UID: \"86934207-9ef6-488d-8f95-4ab3ad0c5fc7\") " pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.258049 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a402d8e-da25-43fd-a5c8-bb0588d544ab-logs\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.266465 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86934207-9ef6-488d-8f95-4ab3ad0c5fc7-config-data\") pod \"nova-metadata-0\" (UID: \"86934207-9ef6-488d-8f95-4ab3ad0c5fc7\") " pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.267238 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a402d8e-da25-43fd-a5c8-bb0588d544ab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.268741 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a402d8e-da25-43fd-a5c8-bb0588d544ab-config-data\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.269243 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a402d8e-da25-43fd-a5c8-bb0588d544ab-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.271466 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a402d8e-da25-43fd-a5c8-bb0588d544ab-public-tls-certs\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.279865 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/86934207-9ef6-488d-8f95-4ab3ad0c5fc7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"86934207-9ef6-488d-8f95-4ab3ad0c5fc7\") " pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.284393 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86934207-9ef6-488d-8f95-4ab3ad0c5fc7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"86934207-9ef6-488d-8f95-4ab3ad0c5fc7\") " pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.284489 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q25kc\" (UniqueName: \"kubernetes.io/projected/4a402d8e-da25-43fd-a5c8-bb0588d544ab-kube-api-access-q25kc\") pod \"nova-api-0\" (UID: \"4a402d8e-da25-43fd-a5c8-bb0588d544ab\") " pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.286280 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d68r6\" (UniqueName: \"kubernetes.io/projected/86934207-9ef6-488d-8f95-4ab3ad0c5fc7-kube-api-access-d68r6\") pod \"nova-metadata-0\" (UID: \"86934207-9ef6-488d-8f95-4ab3ad0c5fc7\") " pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.427256 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.439520 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.655840 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d" path="/var/lib/kubelet/pods/66e8f8d6-cb02-4c56-8579-1d4ae3d2bf1d/volumes"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.657199 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7" path="/var/lib/kubelet/pods/6f9a6e9b-5b12-4812-9d39-3ac67dfe6bb7/volumes"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.657932 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4076258-0b0c-4084-ad95-a75be0238578" path="/var/lib/kubelet/pods/e4076258-0b0c-4084-ad95-a75be0238578/volumes"
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.956719 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f496d39-b701-4789-aaff-d7bf1602a945","Type":"ContainerStarted","Data":"bbd890fddf8389eb84d965ffd258b572edf4684ee943329f2932121ead8f1c2d"}
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.956765 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f496d39-b701-4789-aaff-d7bf1602a945","Type":"ContainerStarted","Data":"cffa87c64cf757da7f732c2905c2a59d374ada47ca64be3386b4ce71972ea736"}
Dec 10 15:01:40 crc kubenswrapper[4727]: I1210 15:01:40.975634 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.975610405 podStartE2EDuration="2.975610405s" podCreationTimestamp="2025-12-10 15:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:01:40.972423614 +0000 UTC m=+1805.167198156" watchObservedRunningTime="2025-12-10 15:01:40.975610405 +0000 UTC m=+1805.170384957"
Dec 10 15:01:41 crc kubenswrapper[4727]: I1210 15:01:41.179718 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 10 15:01:41 crc kubenswrapper[4727]: W1210 15:01:41.179950 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a402d8e_da25_43fd_a5c8_bb0588d544ab.slice/crio-7aa13609f9bc6098c85d216f0dc5d42152af492c7faaa60e0ff35c264508accf WatchSource:0}: Error finding container 7aa13609f9bc6098c85d216f0dc5d42152af492c7faaa60e0ff35c264508accf: Status 404 returned error can't find the container with id 7aa13609f9bc6098c85d216f0dc5d42152af492c7faaa60e0ff35c264508accf
Dec 10 15:01:41 crc kubenswrapper[4727]: W1210 15:01:41.182498 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86934207_9ef6_488d_8f95_4ab3ad0c5fc7.slice/crio-2ee12ce3b0af14274eedaa4d297e568eee295decea2a4c1846b2a839401f4cb6 WatchSource:0}: Error finding container 2ee12ce3b0af14274eedaa4d297e568eee295decea2a4c1846b2a839401f4cb6: Status 404 returned error can't find the container with id 2ee12ce3b0af14274eedaa4d297e568eee295decea2a4c1846b2a839401f4cb6
Dec 10 15:01:41 crc kubenswrapper[4727]: I1210 15:01:41.196783 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 10 15:01:41 crc kubenswrapper[4727]: I1210 15:01:41.979067 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"86934207-9ef6-488d-8f95-4ab3ad0c5fc7","Type":"ContainerStarted","Data":"ba7404af7a9e6f15af62cf2022e5c5033f8519c5624da704749ceb42add43b27"}
Dec 10 15:01:41 crc kubenswrapper[4727]: I1210 15:01:41.979697 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"86934207-9ef6-488d-8f95-4ab3ad0c5fc7","Type":"ContainerStarted","Data":"2ee12ce3b0af14274eedaa4d297e568eee295decea2a4c1846b2a839401f4cb6"}
Dec 10 15:01:41 crc kubenswrapper[4727]: I1210 15:01:41.983280 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a402d8e-da25-43fd-a5c8-bb0588d544ab","Type":"ContainerStarted","Data":"fcec4e650267ab4eaa4a880627026bdf777b6db41315d53c96169af13d58bba0"}
Dec 10 15:01:41 crc kubenswrapper[4727]: I1210 15:01:41.983316 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a402d8e-da25-43fd-a5c8-bb0588d544ab","Type":"ContainerStarted","Data":"7aa13609f9bc6098c85d216f0dc5d42152af492c7faaa60e0ff35c264508accf"}
Dec 10 15:01:42 crc kubenswrapper[4727]: I1210 15:01:42.998984 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"86934207-9ef6-488d-8f95-4ab3ad0c5fc7","Type":"ContainerStarted","Data":"857b210a80381ddd51a3592e151264059de8fa989e6f8a774d460e15061d01e5"}
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.007563 4727 generic.go:334] "Generic (PLEG): container finished" podID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerID="932455ff6e476d141d22f9cb369e7ac3975796b19874014c52da8e1222504607" exitCode=0
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.007647 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be66e59e-7621-4438-8c4c-5d5791a3f3f8","Type":"ContainerDied","Data":"932455ff6e476d141d22f9cb369e7ac3975796b19874014c52da8e1222504607"}
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.028044 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a402d8e-da25-43fd-a5c8-bb0588d544ab","Type":"ContainerStarted","Data":"9e31e98d751adbf297940f6f84be4545e860406ba3d19aead1bd4d0c5e4821f0"}
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.051020 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.050990606 podStartE2EDuration="3.050990606s" podCreationTimestamp="2025-12-10 15:01:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:01:43.022391272 +0000 UTC m=+1807.217165814" watchObservedRunningTime="2025-12-10 15:01:43.050990606 +0000 UTC m=+1807.245765148"
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.064390 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.064370284 podStartE2EDuration="4.064370284s" podCreationTimestamp="2025-12-10 15:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:01:43.063191404 +0000 UTC m=+1807.257965946" watchObservedRunningTime="2025-12-10 15:01:43.064370284 +0000 UTC m=+1807.259144826"
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.441789 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.506992 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-combined-ca-bundle\") pod \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") "
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.507048 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-scripts\") pod \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") "
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.507085 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-sg-core-conf-yaml\") pod \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") "
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.507128 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be66e59e-7621-4438-8c4c-5d5791a3f3f8-log-httpd\") pod \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") "
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.507154 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkvnp\" (UniqueName: \"kubernetes.io/projected/be66e59e-7621-4438-8c4c-5d5791a3f3f8-kube-api-access-bkvnp\") pod \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") "
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.507341 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be66e59e-7621-4438-8c4c-5d5791a3f3f8-run-httpd\") pod \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") "
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.507371 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-config-data\") pod \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\" (UID: \"be66e59e-7621-4438-8c4c-5d5791a3f3f8\") "
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.508674 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be66e59e-7621-4438-8c4c-5d5791a3f3f8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "be66e59e-7621-4438-8c4c-5d5791a3f3f8" (UID: "be66e59e-7621-4438-8c4c-5d5791a3f3f8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.509115 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be66e59e-7621-4438-8c4c-5d5791a3f3f8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "be66e59e-7621-4438-8c4c-5d5791a3f3f8" (UID: "be66e59e-7621-4438-8c4c-5d5791a3f3f8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.514032 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-scripts" (OuterVolumeSpecName: "scripts") pod "be66e59e-7621-4438-8c4c-5d5791a3f3f8" (UID: "be66e59e-7621-4438-8c4c-5d5791a3f3f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.523212 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be66e59e-7621-4438-8c4c-5d5791a3f3f8-kube-api-access-bkvnp" (OuterVolumeSpecName: "kube-api-access-bkvnp") pod "be66e59e-7621-4438-8c4c-5d5791a3f3f8" (UID: "be66e59e-7621-4438-8c4c-5d5791a3f3f8"). InnerVolumeSpecName "kube-api-access-bkvnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.545333 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "be66e59e-7621-4438-8c4c-5d5791a3f3f8" (UID: "be66e59e-7621-4438-8c4c-5d5791a3f3f8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.565018 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212"
Dec 10 15:01:43 crc kubenswrapper[4727]: E1210 15:01:43.565270 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.608056 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be66e59e-7621-4438-8c4c-5d5791a3f3f8" (UID: "be66e59e-7621-4438-8c4c-5d5791a3f3f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.612364 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be66e59e-7621-4438-8c4c-5d5791a3f3f8-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.612413 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.612616 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.612628 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.612641 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be66e59e-7621-4438-8c4c-5d5791a3f3f8-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.612653 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkvnp\" (UniqueName: \"kubernetes.io/projected/be66e59e-7621-4438-8c4c-5d5791a3f3f8-kube-api-access-bkvnp\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.679555 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-config-data" (OuterVolumeSpecName: "config-data") pod "be66e59e-7621-4438-8c4c-5d5791a3f3f8" (UID: "be66e59e-7621-4438-8c4c-5d5791a3f3f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:01:43 crc kubenswrapper[4727]: I1210 15:01:43.715161 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be66e59e-7621-4438-8c4c-5d5791a3f3f8-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.044917 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.050323 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be66e59e-7621-4438-8c4c-5d5791a3f3f8","Type":"ContainerDied","Data":"040c2ccc911b40ea2da2c7ed1f4cd1816bc108bf2bb346ddcc3d01bf0eea562f"}
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.050435 4727 scope.go:117] "RemoveContainer" containerID="fb6e5a61ff936a6d199bbfcb2b378dc57b41deb874c425f35ac7d9b763fdf02f"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.088252 4727 scope.go:117] "RemoveContainer" containerID="ce34707cd5d751a596804fd58bd973206cc4477ef40f33cc3ef1b748fe5fb58b"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.100984 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.126978 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.150347 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 10 15:01:44 crc kubenswrapper[4727]: E1210 15:01:44.151015 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="ceilometer-central-agent"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.151040 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="ceilometer-central-agent"
Dec 10 15:01:44 crc kubenswrapper[4727]: E1210 15:01:44.151084 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="sg-core"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.151092 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="sg-core"
Dec 10 15:01:44 crc kubenswrapper[4727]: E1210 15:01:44.151111 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="ceilometer-notification-agent"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.151118 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="ceilometer-notification-agent"
Dec 10 15:01:44 crc kubenswrapper[4727]: E1210 15:01:44.151126 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="proxy-httpd"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.151133 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="proxy-httpd"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.151434 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="proxy-httpd"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.151471 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="ceilometer-notification-agent"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.151482 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="sg-core"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.151491 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" containerName="ceilometer-central-agent"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.154299 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.158335 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.159300 4727 scope.go:117] "RemoveContainer" containerID="932455ff6e476d141d22f9cb369e7ac3975796b19874014c52da8e1222504607"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.159348 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.159663 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.164199 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.200263 4727 scope.go:117] "RemoveContainer" containerID="fd3850753860bffa8f1f01595dd4e45f9223c5a2265cfc3dc3d355829f67b3fd"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.299042 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.334217 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.334348 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-scripts\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.334394 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.334416 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237fc16a-eb29-4279-8c3c-0348f883d1c4-run-httpd\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.334467 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gqjm\" (UniqueName: \"kubernetes.io/projected/237fc16a-eb29-4279-8c3c-0348f883d1c4-kube-api-access-5gqjm\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.334501 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.334537 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-config-data\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.334598 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237fc16a-eb29-4279-8c3c-0348f883d1c4-log-httpd\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.436285 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-config-data\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.436416 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237fc16a-eb29-4279-8c3c-0348f883d1c4-log-httpd\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.436462 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.436571 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-scripts\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.436618 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.436638 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237fc16a-eb29-4279-8c3c-0348f883d1c4-run-httpd\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.436711 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gqjm\" (UniqueName: \"kubernetes.io/projected/237fc16a-eb29-4279-8c3c-0348f883d1c4-kube-api-access-5gqjm\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.436753 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.438133 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237fc16a-eb29-4279-8c3c-0348f883d1c4-run-httpd\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.438456 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237fc16a-eb29-4279-8c3c-0348f883d1c4-log-httpd\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.442409 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-config-data\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.442732 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-scripts\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.443153 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.443317 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.444081 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.518516 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gqjm\" (UniqueName: \"kubernetes.io/projected/237fc16a-eb29-4279-8c3c-0348f883d1c4-kube-api-access-5gqjm\") pod \"ceilometer-0\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") " pod="openstack/ceilometer-0"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.576524 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be66e59e-7621-4438-8c4c-5d5791a3f3f8" path="/var/lib/kubelet/pods/be66e59e-7621-4438-8c4c-5d5791a3f3f8/volumes"
Dec 10 15:01:44 crc kubenswrapper[4727]: I1210 15:01:44.784622 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.275470 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.328479 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-rnv6s"]
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.341767 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-rnv6s"]
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.408659 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-dxhgk"]
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.410657 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.418656 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.441106 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.441420 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.445117 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-dxhgk"]
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.562738 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64cfea48-c6f9-4698-a328-62937b40c2db-scripts\") pod \"cloudkitty-db-sync-dxhgk\" (UID: \"64cfea48-c6f9-4698-a328-62937b40c2db\") " pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.562927 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64cfea48-c6f9-4698-a328-62937b40c2db-config-data\") pod \"cloudkitty-db-sync-dxhgk\" (UID: \"64cfea48-c6f9-4698-a328-62937b40c2db\") " pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.562975 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/64cfea48-c6f9-4698-a328-62937b40c2db-certs\") pod \"cloudkitty-db-sync-dxhgk\" (UID: \"64cfea48-c6f9-4698-a328-62937b40c2db\") " pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.563136 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsk25\" (UniqueName: \"kubernetes.io/projected/64cfea48-c6f9-4698-a328-62937b40c2db-kube-api-access-hsk25\") pod \"cloudkitty-db-sync-dxhgk\" (UID: \"64cfea48-c6f9-4698-a328-62937b40c2db\") " pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.563168 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64cfea48-c6f9-4698-a328-62937b40c2db-combined-ca-bundle\") pod \"cloudkitty-db-sync-dxhgk\" (UID: \"64cfea48-c6f9-4698-a328-62937b40c2db\") " pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.667524 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsk25\" (UniqueName: \"kubernetes.io/projected/64cfea48-c6f9-4698-a328-62937b40c2db-kube-api-access-hsk25\") pod \"cloudkitty-db-sync-dxhgk\" (UID: \"64cfea48-c6f9-4698-a328-62937b40c2db\") " pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.667595 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64cfea48-c6f9-4698-a328-62937b40c2db-combined-ca-bundle\") pod \"cloudkitty-db-sync-dxhgk\" (UID: \"64cfea48-c6f9-4698-a328-62937b40c2db\") " pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.667704 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64cfea48-c6f9-4698-a328-62937b40c2db-scripts\") pod \"cloudkitty-db-sync-dxhgk\" (UID: \"64cfea48-c6f9-4698-a328-62937b40c2db\") " pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.667829 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64cfea48-c6f9-4698-a328-62937b40c2db-config-data\") pod \"cloudkitty-db-sync-dxhgk\" (UID: \"64cfea48-c6f9-4698-a328-62937b40c2db\") " pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.667866 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/64cfea48-c6f9-4698-a328-62937b40c2db-certs\") pod \"cloudkitty-db-sync-dxhgk\" (UID: \"64cfea48-c6f9-4698-a328-62937b40c2db\") " pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.674072 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/64cfea48-c6f9-4698-a328-62937b40c2db-certs\") pod \"cloudkitty-db-sync-dxhgk\" (UID: \"64cfea48-c6f9-4698-a328-62937b40c2db\") " pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.674721 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64cfea48-c6f9-4698-a328-62937b40c2db-combined-ca-bundle\") pod \"cloudkitty-db-sync-dxhgk\" (UID: \"64cfea48-c6f9-4698-a328-62937b40c2db\") " pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.679297 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64cfea48-c6f9-4698-a328-62937b40c2db-scripts\") pod \"cloudkitty-db-sync-dxhgk\" (UID: \"64cfea48-c6f9-4698-a328-62937b40c2db\") " pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.679825 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64cfea48-c6f9-4698-a328-62937b40c2db-config-data\") pod \"cloudkitty-db-sync-dxhgk\" (UID: \"64cfea48-c6f9-4698-a328-62937b40c2db\") " pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.704884 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsk25\" (UniqueName: \"kubernetes.io/projected/64cfea48-c6f9-4698-a328-62937b40c2db-kube-api-access-hsk25\") pod \"cloudkitty-db-sync-dxhgk\" (UID: \"64cfea48-c6f9-4698-a328-62937b40c2db\") " pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.753496 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-dxhgk"
Dec 10 15:01:45 crc kubenswrapper[4727]: I1210 15:01:45.822330 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Dec 10 15:01:46 crc kubenswrapper[4727]: I1210 15:01:46.088217 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237fc16a-eb29-4279-8c3c-0348f883d1c4","Type":"ContainerStarted","Data":"a3aceaca98bc910e8b928a9da7eb5b6bf2dccb445ad55afc2ac332e14b81c0b1"}
Dec 10 15:01:46 crc kubenswrapper[4727]: I1210 15:01:46.346199 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-dxhgk"]
Dec 10 15:01:46 crc kubenswrapper[4727]: E1210 15:01:46.473605 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 10 15:01:46 crc kubenswrapper[4727]: E1210 15:01:46.473681 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired.
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:01:46 crc kubenswrapper[4727]: E1210 15:01:46.473829 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:01:46 crc kubenswrapper[4727]: E1210 15:01:46.475418 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:01:46 crc kubenswrapper[4727]: I1210 15:01:46.580161 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ef9b442-2a15-4657-b111-5af4a72d39e4" path="/var/lib/kubelet/pods/8ef9b442-2a15-4657-b111-5af4a72d39e4/volumes"
Dec 10 15:01:47 crc kubenswrapper[4727]: I1210 15:01:47.102794 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237fc16a-eb29-4279-8c3c-0348f883d1c4","Type":"ContainerStarted","Data":"e98f516ef35d35ebcd9a7c7f55042856e7a962016cfb0fbeedffcfb5696ab522"}
Dec 10 15:01:47 crc kubenswrapper[4727]: I1210 15:01:47.103126 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237fc16a-eb29-4279-8c3c-0348f883d1c4","Type":"ContainerStarted","Data":"5ee6881012eba85e05ee61b284345499bf243279933486278dba4f4cca2e0aa5"}
Dec 10 15:01:47 crc kubenswrapper[4727]: I1210 15:01:47.104449 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-dxhgk" event={"ID":"64cfea48-c6f9-4698-a328-62937b40c2db","Type":"ContainerStarted","Data":"3498becedde89b51872d5b8dfd37cf3670043a55c6bd689bb0a3dc973c1f56b5"}
Dec 10 15:01:47 crc kubenswrapper[4727]: E1210 15:01:47.108810 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:01:48 crc kubenswrapper[4727]: I1210 15:01:48.132057 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237fc16a-eb29-4279-8c3c-0348f883d1c4","Type":"ContainerStarted","Data":"6eac0df4ef5b05d6ffac026929e55b8e44bdad184f2d2d2f0e84ba226e4d38aa"}
Dec 10 15:01:48 crc kubenswrapper[4727]: E1210 15:01:48.135041 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:01:48 crc kubenswrapper[4727]: I1210 15:01:48.206763 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 10 15:01:49 crc kubenswrapper[4727]: I1210 15:01:49.061668 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 10 15:01:49 crc kubenswrapper[4727]: I1210 15:01:49.298805 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 10 15:01:49 crc kubenswrapper[4727]: I1210 15:01:49.381523 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 10 15:01:49 crc kubenswrapper[4727]: I1210 15:01:49.599226 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 15:01:50 crc kubenswrapper[4727]: I1210 15:01:50.155058 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237fc16a-eb29-4279-8c3c-0348f883d1c4","Type":"ContainerStarted","Data":"8be80e9fb1e4821a57c5b28b1ef61861b4e4f92514cdc7bc5ba6d57e36c18a1a"}
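
At this point cloudkitty-db-sync-dxhgk is wedged: the first pull fails hard (ErrImagePull) because the current-tested tag was garbage-collected on quay.rdoproject.org, and subsequent sync attempts are throttled as ImagePullBackOff while the kubelet retries with an increasing back-off (capped at a few minutes by default). The same waiting reason can be read straight off the pod object; a minimal client-go sketch, where the kubeconfig path and the hard-coded namespace/pod name are illustrative assumptions:

// Print each container's waiting reason for the stuck pod.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("openstack").Get(context.TODO(), "cloudkitty-db-sync-dxhgk", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		if w := st.State.Waiting; w != nil {
			// Here the reason alternates between ErrImagePull and ImagePullBackOff.
			fmt.Printf("%s: %s: %s\n", st.Name, w.Reason, w.Message)
		}
	}
}

Nothing on the node can fix this; the pull keeps failing until the job is pointed at an image tag that still exists in the registry.
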
Dec 10 15:01:50 crc kubenswrapper[4727]: I1210 15:01:50.196523 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 10 15:01:50 crc kubenswrapper[4727]: I1210 15:01:50.212714 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.368298459 podStartE2EDuration="6.21268517s" podCreationTimestamp="2025-12-10 15:01:44 +0000 UTC" firstStartedPulling="2025-12-10 15:01:45.291293189 +0000 UTC m=+1809.486067741" lastFinishedPulling="2025-12-10 15:01:49.13567991 +0000 UTC m=+1813.330454452" observedRunningTime="2025-12-10 15:01:50.207056244 +0000 UTC m=+1814.401830796" watchObservedRunningTime="2025-12-10 15:01:50.21268517 +0000 UTC m=+1814.407459742"
Dec 10 15:01:50 crc kubenswrapper[4727]: I1210 15:01:50.428947 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 10 15:01:50 crc kubenswrapper[4727]: I1210 15:01:50.429022 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 10 15:01:50 crc kubenswrapper[4727]: I1210 15:01:50.440898 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 10 15:01:50 crc kubenswrapper[4727]: I1210 15:01:50.440965 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 10 15:01:51 crc kubenswrapper[4727]: I1210 15:01:51.165821 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="ceilometer-central-agent" containerID="cri-o://5ee6881012eba85e05ee61b284345499bf243279933486278dba4f4cca2e0aa5" gracePeriod=30
Dec 10 15:01:51 crc kubenswrapper[4727]: I1210 15:01:51.167527 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 10 15:01:51 crc kubenswrapper[4727]: I1210 15:01:51.168299 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="sg-core" containerID="cri-o://6eac0df4ef5b05d6ffac026929e55b8e44bdad184f2d2d2f0e84ba226e4d38aa" gracePeriod=30
Dec 10 15:01:51 crc kubenswrapper[4727]: I1210 15:01:51.168320 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="ceilometer-notification-agent" containerID="cri-o://e98f516ef35d35ebcd9a7c7f55042856e7a962016cfb0fbeedffcfb5696ab522" gracePeriod=30
Dec 10 15:01:51 crc kubenswrapper[4727]: I1210 15:01:51.168422 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="proxy-httpd" containerID="cri-o://8be80e9fb1e4821a57c5b28b1ef61861b4e4f92514cdc7bc5ba6d57e36c18a1a" gracePeriod=30
Dec 10 15:01:51 crc kubenswrapper[4727]: I1210 15:01:51.472211 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4a402d8e-da25-43fd-a5c8-bb0588d544ab" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.227:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
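
The pod_startup_latency_tracker entry in the burst above is worth decoding: podStartE2EDuration (6.21268517s) is watchObservedRunningTime minus podCreationTimestamp, while podStartSLOduration (2.368298459s) additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling), all computed on the monotonic offsets (the m=+... values). Checking the arithmetic with the logged numbers:

// Verify the tracker's arithmetic using the monotonic offsets above.
package main

import "fmt"

func main() {
	firstStartedPulling := 1809.486067741 // m=+ offset, seconds
	lastFinishedPulling := 1813.330454452
	e2e := 6.21268517 // watchObservedRunningTime - podCreationTimestamp

	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull window:  %.9fs\n", pull)     // 3.844386711s
	fmt.Printf("SLO duration: %.9fs\n", e2e-pull) // 2.368298459s, matching the log
}
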
containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:01:51 crc kubenswrapper[4727]: I1210 15:01:51.472743 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="86934207-9ef6-488d-8f95-4ab3ad0c5fc7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:01:51 crc kubenswrapper[4727]: I1210 15:01:51.472781 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4a402d8e-da25-43fd-a5c8-bb0588d544ab" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.227:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:01:52 crc kubenswrapper[4727]: I1210 15:01:52.184070 4727 generic.go:334] "Generic (PLEG): container finished" podID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerID="8be80e9fb1e4821a57c5b28b1ef61861b4e4f92514cdc7bc5ba6d57e36c18a1a" exitCode=0 Dec 10 15:01:52 crc kubenswrapper[4727]: I1210 15:01:52.184321 4727 generic.go:334] "Generic (PLEG): container finished" podID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerID="6eac0df4ef5b05d6ffac026929e55b8e44bdad184f2d2d2f0e84ba226e4d38aa" exitCode=2 Dec 10 15:01:52 crc kubenswrapper[4727]: I1210 15:01:52.184143 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237fc16a-eb29-4279-8c3c-0348f883d1c4","Type":"ContainerDied","Data":"8be80e9fb1e4821a57c5b28b1ef61861b4e4f92514cdc7bc5ba6d57e36c18a1a"} Dec 10 15:01:52 crc kubenswrapper[4727]: I1210 15:01:52.184363 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237fc16a-eb29-4279-8c3c-0348f883d1c4","Type":"ContainerDied","Data":"6eac0df4ef5b05d6ffac026929e55b8e44bdad184f2d2d2f0e84ba226e4d38aa"} Dec 10 15:01:52 crc kubenswrapper[4727]: I1210 15:01:52.184378 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237fc16a-eb29-4279-8c3c-0348f883d1c4","Type":"ContainerDied","Data":"e98f516ef35d35ebcd9a7c7f55042856e7a962016cfb0fbeedffcfb5696ab522"} Dec 10 15:01:52 crc kubenswrapper[4727]: I1210 15:01:52.184330 4727 generic.go:334] "Generic (PLEG): container finished" podID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerID="e98f516ef35d35ebcd9a7c7f55042856e7a962016cfb0fbeedffcfb5696ab522" exitCode=0 Dec 10 15:01:54 crc kubenswrapper[4727]: I1210 15:01:54.627058 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="28ce859f-f595-4f9a-ad5d-1131acd951c7" containerName="rabbitmq" containerID="cri-o://23a3f6ec9e03d757e9895cc9cc380b400cb166fdfc4adc24fc70e95f4a2e1aea" gracePeriod=604794 Dec 10 15:01:55 crc kubenswrapper[4727]: I1210 15:01:55.184619 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="8216a031-5caf-4b21-9613-c798dd35dfb7" containerName="rabbitmq" containerID="cri-o://acbcdaad2623c3eff1e8655b2176c4ebaec9fd19791231fb76d3a812ca13a55c" gracePeriod=604794 Dec 10 15:01:57 crc kubenswrapper[4727]: I1210 15:01:57.564070 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:01:57 crc kubenswrapper[4727]: E1210 15:01:57.564732 4727 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:01:59 crc kubenswrapper[4727]: E1210 15:01:59.687099 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:01:59 crc kubenswrapper[4727]: E1210 15:01:59.688830 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:01:59 crc kubenswrapper[4727]: E1210 15:01:59.689149 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged
:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:01:59 crc kubenswrapper[4727]: E1210 15:01:59.690396 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:02:00 crc kubenswrapper[4727]: I1210 15:02:00.437647 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 15:02:00 crc kubenswrapper[4727]: I1210 15:02:00.438193 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 15:02:00 crc kubenswrapper[4727]: I1210 15:02:00.447426 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 15:02:00 crc kubenswrapper[4727]: I1210 15:02:00.447493 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 15:02:00 crc kubenswrapper[4727]: I1210 15:02:00.447876 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 15:02:00 crc kubenswrapper[4727]: I1210 15:02:00.449431 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 15:02:00 crc kubenswrapper[4727]: I1210 15:02:00.454053 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 10 15:02:01 crc kubenswrapper[4727]: I1210 15:02:01.299159 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 15:02:01 crc kubenswrapper[4727]: I1210 15:02:01.304978 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 10 15:02:01 crc kubenswrapper[4727]: I1210 15:02:01.306124 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.320740 4727 generic.go:334] "Generic (PLEG): container finished" podID="8216a031-5caf-4b21-9613-c798dd35dfb7" containerID="acbcdaad2623c3eff1e8655b2176c4ebaec9fd19791231fb76d3a812ca13a55c" exitCode=0 Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.320817 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"8216a031-5caf-4b21-9613-c798dd35dfb7","Type":"ContainerDied","Data":"acbcdaad2623c3eff1e8655b2176c4ebaec9fd19791231fb76d3a812ca13a55c"} Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.321185 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8216a031-5caf-4b21-9613-c798dd35dfb7","Type":"ContainerDied","Data":"adabb6c3cdc1b7cc67865cfdb6650faa2826c10c322522f62e89b9f92da846c6"} Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.321204 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adabb6c3cdc1b7cc67865cfdb6650faa2826c10c322522f62e89b9f92da846c6" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.329401 4727 generic.go:334] "Generic (PLEG): container finished" podID="28ce859f-f595-4f9a-ad5d-1131acd951c7" containerID="23a3f6ec9e03d757e9895cc9cc380b400cb166fdfc4adc24fc70e95f4a2e1aea" exitCode=0 Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.329609 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"28ce859f-f595-4f9a-ad5d-1131acd951c7","Type":"ContainerDied","Data":"23a3f6ec9e03d757e9895cc9cc380b400cb166fdfc4adc24fc70e95f4a2e1aea"} Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.329696 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"28ce859f-f595-4f9a-ad5d-1131acd951c7","Type":"ContainerDied","Data":"f5be6253d1674772a5c7278e5b4443d1a18afefcc8a9b33c205c30ab8a752acf"} Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.329736 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5be6253d1674772a5c7278e5b4443d1a18afefcc8a9b33c205c30ab8a752acf" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.525805 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.538466 4727 util.go:48] "No ready sandbox for pod can be found. 
Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.538466 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.561786 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-config-data\") pod \"8216a031-5caf-4b21-9613-c798dd35dfb7\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") "
Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.624101 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\") pod \"28ce859f-f595-4f9a-ad5d-1131acd951c7\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") "
Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.624459 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-server-conf\") pod \"8216a031-5caf-4b21-9613-c798dd35dfb7\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") "
Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.624508 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8216a031-5caf-4b21-9613-c798dd35dfb7-pod-info\") pod \"8216a031-5caf-4b21-9613-c798dd35dfb7\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") "
Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.624554 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnp6q\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-kube-api-access-bnp6q\") pod \"8216a031-5caf-4b21-9613-c798dd35dfb7\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") "
Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.624617 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/28ce859f-f595-4f9a-ad5d-1131acd951c7-pod-info\") pod \"28ce859f-f595-4f9a-ad5d-1131acd951c7\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") "
Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.624686 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-server-conf\") pod \"28ce859f-f595-4f9a-ad5d-1131acd951c7\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") "
Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.624745 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/28ce859f-f595-4f9a-ad5d-1131acd951c7-erlang-cookie-secret\") pod \"28ce859f-f595-4f9a-ad5d-1131acd951c7\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") "
Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.624827 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-erlang-cookie\") pod \"8216a031-5caf-4b21-9613-c798dd35dfb7\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") "
Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.624854 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-config-data\") pod 
\"28ce859f-f595-4f9a-ad5d-1131acd951c7\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.624884 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-confd\") pod \"28ce859f-f595-4f9a-ad5d-1131acd951c7\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.624935 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-tls\") pod \"28ce859f-f595-4f9a-ad5d-1131acd951c7\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.633858 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\") pod \"8216a031-5caf-4b21-9613-c798dd35dfb7\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.634056 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz5s9\" (UniqueName: \"kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-kube-api-access-nz5s9\") pod \"28ce859f-f595-4f9a-ad5d-1131acd951c7\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.634100 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-tls\") pod \"8216a031-5caf-4b21-9613-c798dd35dfb7\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.634142 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-confd\") pod \"8216a031-5caf-4b21-9613-c798dd35dfb7\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.634181 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8216a031-5caf-4b21-9613-c798dd35dfb7-erlang-cookie-secret\") pod \"8216a031-5caf-4b21-9613-c798dd35dfb7\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.634221 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-plugins\") pod \"8216a031-5caf-4b21-9613-c798dd35dfb7\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.634297 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-plugins\") pod \"28ce859f-f595-4f9a-ad5d-1131acd951c7\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.634323 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-erlang-cookie\") pod \"28ce859f-f595-4f9a-ad5d-1131acd951c7\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.634374 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-plugins-conf\") pod \"28ce859f-f595-4f9a-ad5d-1131acd951c7\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.634417 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-plugins-conf\") pod \"8216a031-5caf-4b21-9613-c798dd35dfb7\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.665590 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8216a031-5caf-4b21-9613-c798dd35dfb7" (UID: "8216a031-5caf-4b21-9613-c798dd35dfb7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.668148 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8216a031-5caf-4b21-9613-c798dd35dfb7" (UID: "8216a031-5caf-4b21-9613-c798dd35dfb7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.668551 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "28ce859f-f595-4f9a-ad5d-1131acd951c7" (UID: "28ce859f-f595-4f9a-ad5d-1131acd951c7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.673494 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "28ce859f-f595-4f9a-ad5d-1131acd951c7" (UID: "28ce859f-f595-4f9a-ad5d-1131acd951c7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.679875 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-kube-api-access-nz5s9" (OuterVolumeSpecName: "kube-api-access-nz5s9") pod "28ce859f-f595-4f9a-ad5d-1131acd951c7" (UID: "28ce859f-f595-4f9a-ad5d-1131acd951c7"). InnerVolumeSpecName "kube-api-access-nz5s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.684204 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "28ce859f-f595-4f9a-ad5d-1131acd951c7" (UID: "28ce859f-f595-4f9a-ad5d-1131acd951c7"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: E1210 15:02:02.689778 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986 podName:28ce859f-f595-4f9a-ad5d-1131acd951c7 nodeName:}" failed. No retries permitted until 2025-12-10 15:02:03.189749081 +0000 UTC m=+1827.384523623 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986") pod "28ce859f-f595-4f9a-ad5d-1131acd951c7" (UID: "28ce859f-f595-4f9a-ad5d-1131acd951c7") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.709956 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-config-data" (OuterVolumeSpecName: "config-data") pod "8216a031-5caf-4b21-9613-c798dd35dfb7" (UID: "8216a031-5caf-4b21-9613-c798dd35dfb7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.726182 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "28ce859f-f595-4f9a-ad5d-1131acd951c7" (UID: "28ce859f-f595-4f9a-ad5d-1131acd951c7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.727653 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8216a031-5caf-4b21-9613-c798dd35dfb7" (UID: "8216a031-5caf-4b21-9613-c798dd35dfb7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.728014 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-kube-api-access-bnp6q" (OuterVolumeSpecName: "kube-api-access-bnp6q") pod "8216a031-5caf-4b21-9613-c798dd35dfb7" (UID: "8216a031-5caf-4b21-9613-c798dd35dfb7"). InnerVolumeSpecName "kube-api-access-bnp6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.728160 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8216a031-5caf-4b21-9613-c798dd35dfb7" (UID: "8216a031-5caf-4b21-9613-c798dd35dfb7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.741212 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8216a031-5caf-4b21-9613-c798dd35dfb7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8216a031-5caf-4b21-9613-c798dd35dfb7" (UID: "8216a031-5caf-4b21-9613-c798dd35dfb7"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.743688 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/28ce859f-f595-4f9a-ad5d-1131acd951c7-pod-info" (OuterVolumeSpecName: "pod-info") pod "28ce859f-f595-4f9a-ad5d-1131acd951c7" (UID: "28ce859f-f595-4f9a-ad5d-1131acd951c7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.743842 4727 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/28ce859f-f595-4f9a-ad5d-1131acd951c7-pod-info\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.743888 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.743924 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.743938 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.743948 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz5s9\" (UniqueName: \"kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-kube-api-access-nz5s9\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.743962 4727 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8216a031-5caf-4b21-9613-c798dd35dfb7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.743976 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.743987 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.744000 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.744013 4727 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.744035 4727 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.744051 4727 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.744070 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnp6q\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-kube-api-access-bnp6q\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.754246 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ce859f-f595-4f9a-ad5d-1131acd951c7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "28ce859f-f595-4f9a-ad5d-1131acd951c7" (UID: "28ce859f-f595-4f9a-ad5d-1131acd951c7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.767212 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8216a031-5caf-4b21-9613-c798dd35dfb7-pod-info" (OuterVolumeSpecName: "pod-info") pod "8216a031-5caf-4b21-9613-c798dd35dfb7" (UID: "8216a031-5caf-4b21-9613-c798dd35dfb7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.775137 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-config-data" (OuterVolumeSpecName: "config-data") pod "28ce859f-f595-4f9a-ad5d-1131acd951c7" (UID: "28ce859f-f595-4f9a-ad5d-1131acd951c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: E1210 15:02:02.776221 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2 podName:8216a031-5caf-4b21-9613-c798dd35dfb7 nodeName:}" failed. No retries permitted until 2025-12-10 15:02:03.276198 +0000 UTC m=+1827.470972542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2") pod "8216a031-5caf-4b21-9613-c798dd35dfb7" (UID: "8216a031-5caf-4b21-9613-c798dd35dfb7") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.789644 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-server-conf" (OuterVolumeSpecName: "server-conf") pod "8216a031-5caf-4b21-9613-c798dd35dfb7" (UID: "8216a031-5caf-4b21-9613-c798dd35dfb7"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.836467 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-server-conf" (OuterVolumeSpecName: "server-conf") pod "28ce859f-f595-4f9a-ad5d-1131acd951c7" (UID: "28ce859f-f595-4f9a-ad5d-1131acd951c7"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.849294 4727 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8216a031-5caf-4b21-9613-c798dd35dfb7-server-conf\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.850540 4727 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8216a031-5caf-4b21-9613-c798dd35dfb7-pod-info\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.850561 4727 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-server-conf\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.850698 4727 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/28ce859f-f595-4f9a-ad5d-1131acd951c7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.850718 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28ce859f-f595-4f9a-ad5d-1131acd951c7-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.914389 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "28ce859f-f595-4f9a-ad5d-1131acd951c7" (UID: "28ce859f-f595-4f9a-ad5d-1131acd951c7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.952018 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8216a031-5caf-4b21-9613-c798dd35dfb7" (UID: "8216a031-5caf-4b21-9613-c798dd35dfb7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.952324 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-confd\") pod \"8216a031-5caf-4b21-9613-c798dd35dfb7\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.952897 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/28ce859f-f595-4f9a-ad5d-1131acd951c7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4727]: W1210 15:02:02.953014 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8216a031-5caf-4b21-9613-c798dd35dfb7/volumes/kubernetes.io~projected/rabbitmq-confd Dec 10 15:02:02 crc kubenswrapper[4727]: I1210 15:02:02.953077 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8216a031-5caf-4b21-9613-c798dd35dfb7" (UID: "8216a031-5caf-4b21-9613-c798dd35dfb7"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.055926 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8216a031-5caf-4b21-9613-c798dd35dfb7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.260743 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\") pod \"28ce859f-f595-4f9a-ad5d-1131acd951c7\" (UID: \"28ce859f-f595-4f9a-ad5d-1131acd951c7\") " Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.282883 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986" (OuterVolumeSpecName: "persistence") pod "28ce859f-f595-4f9a-ad5d-1131acd951c7" (UID: "28ce859f-f595-4f9a-ad5d-1131acd951c7"). InnerVolumeSpecName "pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.350031 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.350449 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.364314 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\") pod \"8216a031-5caf-4b21-9613-c798dd35dfb7\" (UID: \"8216a031-5caf-4b21-9613-c798dd35dfb7\") " Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.366995 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\") on node \"crc\" " Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.404975 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2" (OuterVolumeSpecName: "persistence") pod "8216a031-5caf-4b21-9613-c798dd35dfb7" (UID: "8216a031-5caf-4b21-9613-c798dd35dfb7"). InnerVolumeSpecName "pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.414653 4727 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.414807 4727 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986") on node "crc" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.416456 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.435382 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.451355 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:02:03 crc kubenswrapper[4727]: E1210 15:02:03.451992 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ce859f-f595-4f9a-ad5d-1131acd951c7" containerName="rabbitmq" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.452019 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ce859f-f595-4f9a-ad5d-1131acd951c7" containerName="rabbitmq" Dec 10 15:02:03 crc kubenswrapper[4727]: E1210 15:02:03.452039 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8216a031-5caf-4b21-9613-c798dd35dfb7" containerName="setup-container" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.452049 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8216a031-5caf-4b21-9613-c798dd35dfb7" containerName="setup-container" Dec 10 15:02:03 crc kubenswrapper[4727]: E1210 15:02:03.452071 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ce859f-f595-4f9a-ad5d-1131acd951c7" containerName="setup-container" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.452079 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ce859f-f595-4f9a-ad5d-1131acd951c7" containerName="setup-container" Dec 10 15:02:03 crc kubenswrapper[4727]: E1210 15:02:03.452128 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8216a031-5caf-4b21-9613-c798dd35dfb7" containerName="rabbitmq" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.452137 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8216a031-5caf-4b21-9613-c798dd35dfb7" containerName="rabbitmq" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.452389 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8216a031-5caf-4b21-9613-c798dd35dfb7" containerName="rabbitmq" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.452428 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ce859f-f595-4f9a-ad5d-1131acd951c7" containerName="rabbitmq" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.454043 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.456608 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.456783 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.457076 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.457221 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.457370 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.458947 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gl8gb" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.459028 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.469577 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\") on node \"crc\" " Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.469620 4727 reconciler_common.go:293] "Volume detached for volume \"pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.477062 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.664022 4727 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.665056 4727 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2") on node "crc" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.690544 4727 reconciler_common.go:293] "Volume detached for volume \"pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.702230 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.729407 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.766036 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.768354 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.772084 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.772273 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.772377 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.772525 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.772627 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4glfc" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.772690 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.772965 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.779866 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.794045 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.794138 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-config-data\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.794166 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.794205 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.794231 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whdrd\" (UniqueName: \"kubernetes.io/projected/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-kube-api-access-whdrd\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.794337 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.794391 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.794479 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.794502 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.794559 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.794593 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.897422 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.897491 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.897593 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.897636 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-config-data\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.897659 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.897686 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.897751 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.897778 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whdrd\" (UniqueName: \"kubernetes.io/projected/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-kube-api-access-whdrd\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.897811 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.897894 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.898057 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.898087 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.898115 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.898155 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.898188 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.898221 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.898247 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.898278 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.898338 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.898362 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.898423 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.898483 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2xk7\" (UniqueName: 
\"kubernetes.io/projected/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-kube-api-access-b2xk7\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.899035 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-config-data\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.899170 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.899340 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.899891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.900983 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.902358 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.902401 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/71300f1ee2a8cff2dcea612b02795bd22bb9b4f3ccfc60fa5e061f401c587a7e/globalmount\"" pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.905076 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.905791 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.908550 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.910376 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.922740 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whdrd\" (UniqueName: \"kubernetes.io/projected/8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c-kube-api-access-whdrd\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:03 crc kubenswrapper[4727]: I1210 15:02:03.958844 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ecbc5ca-6e4d-4cdb-bbde-fc248283e986\") pod \"rabbitmq-server-0\" (UID: \"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c\") " pod="openstack/rabbitmq-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.000803 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.000872 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.000947 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.000996 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.001031 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.001099 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.001123 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.001176 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.001220 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2xk7\" (UniqueName: \"kubernetes.io/projected/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-kube-api-access-b2xk7\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.001285 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.001306 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.003311 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.003803 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.003841 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.003844 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.006311 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.009666 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.010028 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.010375 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.013391 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.013962 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.014003 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1dbed145fca7d880429c0190be0a1203fa2f5dbc05f2cc520d9aa531cc00aeeb/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.025531 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2xk7\" (UniqueName: \"kubernetes.io/projected/6bb6d576-35c9-4ce9-9fa0-6cef4f513739-kube-api-access-b2xk7\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.064343 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ed104a-bf19-466a-be6e-410e7b4ce3a2\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bb6d576-35c9-4ce9-9fa0-6cef4f513739\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.100790 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.115114 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.600459 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ce859f-f595-4f9a-ad5d-1131acd951c7" path="/var/lib/kubelet/pods/28ce859f-f595-4f9a-ad5d-1131acd951c7/volumes" Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.604302 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8216a031-5caf-4b21-9613-c798dd35dfb7" path="/var/lib/kubelet/pods/8216a031-5caf-4b21-9613-c798dd35dfb7/volumes" Dec 10 15:02:04 crc kubenswrapper[4727]: W1210 15:02:04.742099 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bb6d576_35c9_4ce9_9fa0_6cef4f513739.slice/crio-0296aa18e2fee54b468e5b050d1c724308be1fa68a0adfbc47d3d525c1909a7f WatchSource:0}: Error finding container 0296aa18e2fee54b468e5b050d1c724308be1fa68a0adfbc47d3d525c1909a7f: Status 404 returned error can't find the container with id 0296aa18e2fee54b468e5b050d1c724308be1fa68a0adfbc47d3d525c1909a7f Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.742796 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:02:04 crc kubenswrapper[4727]: I1210 15:02:04.770477 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:02:05 crc kubenswrapper[4727]: I1210 15:02:05.374350 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6bb6d576-35c9-4ce9-9fa0-6cef4f513739","Type":"ContainerStarted","Data":"0296aa18e2fee54b468e5b050d1c724308be1fa68a0adfbc47d3d525c1909a7f"} Dec 10 15:02:05 crc kubenswrapper[4727]: I1210 15:02:05.376147 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c","Type":"ContainerStarted","Data":"ef16c568f424d6d10f4f7783ce7cf329bd59da881b3b322cfaa50ef769b50349"} Dec 10 15:02:07 crc kubenswrapper[4727]: I1210 15:02:07.402177 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6bb6d576-35c9-4ce9-9fa0-6cef4f513739","Type":"ContainerStarted","Data":"85ebad7cb0172c7485b4ce0439aa844e53fe0cb0c013f1bcb41e3efe7c5af244"} Dec 10 15:02:07 crc kubenswrapper[4727]: I1210 15:02:07.405355 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c","Type":"ContainerStarted","Data":"cacf76e1f5c8a5dd92da0a7d51a4990823582691d30eaf732b2964688835233c"} Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.003211 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-kbp24"] Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.006294 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.011059 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.031645 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-kbp24"] Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.086733 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.086872 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2rsd\" (UniqueName: \"kubernetes.io/projected/6351abca-bdee-4f6a-aff3-8673ff834122-kube-api-access-g2rsd\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.086997 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.087057 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.087159 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc 
kubenswrapper[4727]: I1210 15:02:08.087233 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.087291 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-config\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.342866 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.343104 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.344510 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.344563 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.344557 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.344652 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.344723 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.344788 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-config\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.344888 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.345069 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2rsd\" (UniqueName: \"kubernetes.io/projected/6351abca-bdee-4f6a-aff3-8673ff834122-kube-api-access-g2rsd\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.346269 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.347056 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.350554 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-config\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.394093 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2rsd\" (UniqueName: \"kubernetes.io/projected/6351abca-bdee-4f6a-aff3-8673ff834122-kube-api-access-g2rsd\") pod \"dnsmasq-dns-dbb88bf8c-kbp24\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") " pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.563981 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:02:08 crc kubenswrapper[4727]: E1210 15:02:08.564337 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:02:08 crc kubenswrapper[4727]: I1210 15:02:08.631247 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:09 crc kubenswrapper[4727]: I1210 15:02:09.128533 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-kbp24"] Dec 10 15:02:09 crc kubenswrapper[4727]: W1210 15:02:09.133070 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6351abca_bdee_4f6a_aff3_8673ff834122.slice/crio-21efce07c69ec6fd9bede8aec4ce6ed31aa5100e3dea0c56d8d936c071f0ccb5 WatchSource:0}: Error finding container 21efce07c69ec6fd9bede8aec4ce6ed31aa5100e3dea0c56d8d936c071f0ccb5: Status 404 returned error can't find the container with id 21efce07c69ec6fd9bede8aec4ce6ed31aa5100e3dea0c56d8d936c071f0ccb5 Dec 10 15:02:09 crc kubenswrapper[4727]: I1210 15:02:09.466014 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" event={"ID":"6351abca-bdee-4f6a-aff3-8673ff834122","Type":"ContainerStarted","Data":"21efce07c69ec6fd9bede8aec4ce6ed31aa5100e3dea0c56d8d936c071f0ccb5"} Dec 10 15:02:10 crc kubenswrapper[4727]: I1210 15:02:10.478493 4727 generic.go:334] "Generic (PLEG): container finished" podID="6351abca-bdee-4f6a-aff3-8673ff834122" containerID="794d04e11ac7de5ac8aaed9fd462c1f883eb2fb995e4a2857bd6190e87f69503" exitCode=0 Dec 10 15:02:10 crc kubenswrapper[4727]: I1210 15:02:10.478572 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" event={"ID":"6351abca-bdee-4f6a-aff3-8673ff834122","Type":"ContainerDied","Data":"794d04e11ac7de5ac8aaed9fd462c1f883eb2fb995e4a2857bd6190e87f69503"} Dec 10 15:02:10 crc kubenswrapper[4727]: E1210 15:02:10.590034 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:02:11 crc kubenswrapper[4727]: I1210 15:02:11.493029 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" event={"ID":"6351abca-bdee-4f6a-aff3-8673ff834122","Type":"ContainerStarted","Data":"bdec14fb6d043b1d1a978ea07e1629720c2b17fab043efd2fef81b548e75dcf3"} Dec 10 15:02:11 crc kubenswrapper[4727]: I1210 15:02:11.493507 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:11 crc kubenswrapper[4727]: I1210 15:02:11.531539 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" podStartSLOduration=4.531515852 podStartE2EDuration="4.531515852s" podCreationTimestamp="2025-12-10 15:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:02:11.516015207 +0000 UTC m=+1835.710790019" watchObservedRunningTime="2025-12-10 15:02:11.531515852 +0000 UTC m=+1835.726290394" Dec 10 15:02:14 crc kubenswrapper[4727]: I1210 15:02:14.788240 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.229:3000/\": dial tcp 10.217.0.229:3000: connect: connection refused" Dec 10 15:02:18 crc kubenswrapper[4727]: I1210 15:02:18.135795 4727 scope.go:117] 
"RemoveContainer" containerID="8541719ee2bedad7a365e3c9dcbcc2e7fccae5a8505315d8a761a4f0bd1773e3" Dec 10 15:02:18 crc kubenswrapper[4727]: I1210 15:02:18.169188 4727 scope.go:117] "RemoveContainer" containerID="3e6a90c7af245bf5101ab8d0a5c85b5879fb7ea4429f8a77715634231728acc3" Dec 10 15:02:18 crc kubenswrapper[4727]: I1210 15:02:18.325843 4727 scope.go:117] "RemoveContainer" containerID="acbcdaad2623c3eff1e8655b2176c4ebaec9fd19791231fb76d3a812ca13a55c" Dec 10 15:02:18 crc kubenswrapper[4727]: I1210 15:02:18.380712 4727 scope.go:117] "RemoveContainer" containerID="23a3f6ec9e03d757e9895cc9cc380b400cb166fdfc4adc24fc70e95f4a2e1aea" Dec 10 15:02:18 crc kubenswrapper[4727]: I1210 15:02:18.448472 4727 scope.go:117] "RemoveContainer" containerID="57f6cbe7e1b94d02d13977056bd83bec17acf47ee00d1a7c49165a2f91a66058" Dec 10 15:02:18 crc kubenswrapper[4727]: I1210 15:02:18.633871 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" Dec 10 15:02:18 crc kubenswrapper[4727]: I1210 15:02:18.709185 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-mmbg5"] Dec 10 15:02:18 crc kubenswrapper[4727]: I1210 15:02:18.709490 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" podUID="46dda6a1-0c7e-4ea5-9ec5-aab327344ace" containerName="dnsmasq-dns" containerID="cri-o://a6185dccfad3e2d7b0ef60c87acb51e01d4c14cf32503bd0c5ebba7b441e79f4" gracePeriod=10 Dec 10 15:02:18 crc kubenswrapper[4727]: I1210 15:02:18.906981 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-q82js"] Dec 10 15:02:18 crc kubenswrapper[4727]: I1210 15:02:18.910003 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-q82js" Dec 10 15:02:18 crc kubenswrapper[4727]: I1210 15:02:18.944834 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-q82js"] Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.034329 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-config\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js" Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.034455 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-dns-svc\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js" Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.034528 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js" Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.034761 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js" Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.034865 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js" Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.034942 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7v7m\" (UniqueName: \"kubernetes.io/projected/70678c11-c77a-4d32-a1aa-9c4c43140b2f-kube-api-access-k7v7m\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js" Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.035014 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js" Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.137261 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js" Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 
15:02:19.137317 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7v7m\" (UniqueName: \"kubernetes.io/projected/70678c11-c77a-4d32-a1aa-9c4c43140b2f-kube-api-access-k7v7m\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.137351 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.137445 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-config\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.137501 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-dns-svc\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.137553 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.137643 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.138500 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.139640 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.142050 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.143106 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-dns-svc\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.143493 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.153291 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70678c11-c77a-4d32-a1aa-9c4c43140b2f-config\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.189949 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7v7m\" (UniqueName: \"kubernetes.io/projected/70678c11-c77a-4d32-a1aa-9c4c43140b2f-kube-api-access-k7v7m\") pod \"dnsmasq-dns-85f64749dc-q82js\" (UID: \"70678c11-c77a-4d32-a1aa-9c4c43140b2f\") " pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.242260 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.563769 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212"
Dec 10 15:02:19 crc kubenswrapper[4727]: E1210 15:02:19.564632 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.593887 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.603649 4727 generic.go:334] "Generic (PLEG): container finished" podID="46dda6a1-0c7e-4ea5-9ec5-aab327344ace" containerID="a6185dccfad3e2d7b0ef60c87acb51e01d4c14cf32503bd0c5ebba7b441e79f4" exitCode=0
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.603697 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" event={"ID":"46dda6a1-0c7e-4ea5-9ec5-aab327344ace","Type":"ContainerDied","Data":"a6185dccfad3e2d7b0ef60c87acb51e01d4c14cf32503bd0c5ebba7b441e79f4"}
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.603721 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.603728 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" event={"ID":"46dda6a1-0c7e-4ea5-9ec5-aab327344ace","Type":"ContainerDied","Data":"8ab55a4ef19e5fc6f844421de274ca36ca859a192200c475cbf3c9f6e26ad92f"}
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.603741 4727 scope.go:117] "RemoveContainer" containerID="a6185dccfad3e2d7b0ef60c87acb51e01d4c14cf32503bd0c5ebba7b441e79f4"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.664663 4727 scope.go:117] "RemoveContainer" containerID="a39c45f63c2023b4e5026ee2b0ac6c6be6ec72e895c4047e3b8dff293ba0ea25"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.668659 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-config\") pod \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") "
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.668722 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-ovsdbserver-sb\") pod \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") "
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.668854 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl5lt\" (UniqueName: \"kubernetes.io/projected/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-kube-api-access-kl5lt\") pod \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") "
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.669013 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-ovsdbserver-nb\") pod \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") "
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.669079 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-dns-svc\") pod \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") "
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.669155 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-dns-swift-storage-0\") pod \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\" (UID: \"46dda6a1-0c7e-4ea5-9ec5-aab327344ace\") "
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.688932 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-kube-api-access-kl5lt" (OuterVolumeSpecName: "kube-api-access-kl5lt") pod "46dda6a1-0c7e-4ea5-9ec5-aab327344ace" (UID: "46dda6a1-0c7e-4ea5-9ec5-aab327344ace"). InnerVolumeSpecName "kube-api-access-kl5lt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.781261 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl5lt\" (UniqueName: \"kubernetes.io/projected/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-kube-api-access-kl5lt\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.799394 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46dda6a1-0c7e-4ea5-9ec5-aab327344ace" (UID: "46dda6a1-0c7e-4ea5-9ec5-aab327344ace"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.813230 4727 scope.go:117] "RemoveContainer" containerID="a6185dccfad3e2d7b0ef60c87acb51e01d4c14cf32503bd0c5ebba7b441e79f4"
Dec 10 15:02:19 crc kubenswrapper[4727]: E1210 15:02:19.813811 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6185dccfad3e2d7b0ef60c87acb51e01d4c14cf32503bd0c5ebba7b441e79f4\": container with ID starting with a6185dccfad3e2d7b0ef60c87acb51e01d4c14cf32503bd0c5ebba7b441e79f4 not found: ID does not exist" containerID="a6185dccfad3e2d7b0ef60c87acb51e01d4c14cf32503bd0c5ebba7b441e79f4"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.813844 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6185dccfad3e2d7b0ef60c87acb51e01d4c14cf32503bd0c5ebba7b441e79f4"} err="failed to get container status \"a6185dccfad3e2d7b0ef60c87acb51e01d4c14cf32503bd0c5ebba7b441e79f4\": rpc error: code = NotFound desc = could not find container \"a6185dccfad3e2d7b0ef60c87acb51e01d4c14cf32503bd0c5ebba7b441e79f4\": container with ID starting with a6185dccfad3e2d7b0ef60c87acb51e01d4c14cf32503bd0c5ebba7b441e79f4 not found: ID does not exist"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.813892 4727 scope.go:117] "RemoveContainer" containerID="a39c45f63c2023b4e5026ee2b0ac6c6be6ec72e895c4047e3b8dff293ba0ea25"
Dec 10 15:02:19 crc kubenswrapper[4727]: E1210 15:02:19.814257 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a39c45f63c2023b4e5026ee2b0ac6c6be6ec72e895c4047e3b8dff293ba0ea25\": container with ID starting with a39c45f63c2023b4e5026ee2b0ac6c6be6ec72e895c4047e3b8dff293ba0ea25 not found: ID does not exist" containerID="a39c45f63c2023b4e5026ee2b0ac6c6be6ec72e895c4047e3b8dff293ba0ea25"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.814276 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39c45f63c2023b4e5026ee2b0ac6c6be6ec72e895c4047e3b8dff293ba0ea25"} err="failed to get container status \"a39c45f63c2023b4e5026ee2b0ac6c6be6ec72e895c4047e3b8dff293ba0ea25\": rpc error: code = NotFound desc = could not find container \"a39c45f63c2023b4e5026ee2b0ac6c6be6ec72e895c4047e3b8dff293ba0ea25\": container with ID starting with a39c45f63c2023b4e5026ee2b0ac6c6be6ec72e895c4047e3b8dff293ba0ea25 not found: ID does not exist"
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.826114 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "46dda6a1-0c7e-4ea5-9ec5-aab327344ace" (UID: "46dda6a1-0c7e-4ea5-9ec5-aab327344ace"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.827688 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-config" (OuterVolumeSpecName: "config") pod "46dda6a1-0c7e-4ea5-9ec5-aab327344ace" (UID: "46dda6a1-0c7e-4ea5-9ec5-aab327344ace"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.844060 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "46dda6a1-0c7e-4ea5-9ec5-aab327344ace" (UID: "46dda6a1-0c7e-4ea5-9ec5-aab327344ace"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.854627 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "46dda6a1-0c7e-4ea5-9ec5-aab327344ace" (UID: "46dda6a1-0c7e-4ea5-9ec5-aab327344ace"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.883242 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.883295 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.883310 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.883322 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-config\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.883333 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46dda6a1-0c7e-4ea5-9ec5-aab327344ace-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.933647 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-q82js"]
Dec 10 15:02:19 crc kubenswrapper[4727]: W1210 15:02:19.936258 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70678c11_c77a_4d32_a1aa_9c4c43140b2f.slice/crio-0dc8594aca2726e026d96c495ee3bf6a8052b368e7d301ea374ba01c8af5fcae WatchSource:0}: Error finding container 0dc8594aca2726e026d96c495ee3bf6a8052b368e7d301ea374ba01c8af5fcae: Status 404 returned error can't find the container with id 0dc8594aca2726e026d96c495ee3bf6a8052b368e7d301ea374ba01c8af5fcae
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.968603 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-mmbg5"]
Dec 10 15:02:19 crc kubenswrapper[4727]: I1210 15:02:19.982670 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-mmbg5"]
Dec 10 15:02:20 crc kubenswrapper[4727]: I1210 15:02:20.590206 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46dda6a1-0c7e-4ea5-9ec5-aab327344ace" path="/var/lib/kubelet/pods/46dda6a1-0c7e-4ea5-9ec5-aab327344ace/volumes"
Dec 10 15:02:20 crc kubenswrapper[4727]: I1210 15:02:20.618039 4727 generic.go:334] "Generic (PLEG): container finished" podID="70678c11-c77a-4d32-a1aa-9c4c43140b2f" containerID="9306760e27349545ad786ba9480c25fefb0d4506beb248f91f70ce4578ac05bc" exitCode=0
Dec 10 15:02:20 crc kubenswrapper[4727]: I1210 15:02:20.618088 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-q82js" event={"ID":"70678c11-c77a-4d32-a1aa-9c4c43140b2f","Type":"ContainerDied","Data":"9306760e27349545ad786ba9480c25fefb0d4506beb248f91f70ce4578ac05bc"}
Dec 10 15:02:20 crc kubenswrapper[4727]: I1210 15:02:20.618118 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-q82js" event={"ID":"70678c11-c77a-4d32-a1aa-9c4c43140b2f","Type":"ContainerStarted","Data":"0dc8594aca2726e026d96c495ee3bf6a8052b368e7d301ea374ba01c8af5fcae"}
Dec 10 15:02:21 crc kubenswrapper[4727]: I1210 15:02:21.639329 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-q82js" event={"ID":"70678c11-c77a-4d32-a1aa-9c4c43140b2f","Type":"ContainerStarted","Data":"b2fed759e1838d42d40a9db0e8328d84777b758076f23ecea9a9607774da4ef1"}
Dec 10 15:02:21 crc kubenswrapper[4727]: I1210 15:02:21.639684 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:21 crc kubenswrapper[4727]: I1210 15:02:21.649457 4727 generic.go:334] "Generic (PLEG): container finished" podID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerID="5ee6881012eba85e05ee61b284345499bf243279933486278dba4f4cca2e0aa5" exitCode=137
Dec 10 15:02:21 crc kubenswrapper[4727]: I1210 15:02:21.649530 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237fc16a-eb29-4279-8c3c-0348f883d1c4","Type":"ContainerDied","Data":"5ee6881012eba85e05ee61b284345499bf243279933486278dba4f4cca2e0aa5"}
Dec 10 15:02:21 crc kubenswrapper[4727]: I1210 15:02:21.671268 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85f64749dc-q82js" podStartSLOduration=3.671243286 podStartE2EDuration="3.671243286s" podCreationTimestamp="2025-12-10 15:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:02:21.670300103 +0000 UTC m=+1845.865074655" watchObservedRunningTime="2025-12-10 15:02:21.671243286 +0000 UTC m=+1845.866017828"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.072249 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.241067 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237fc16a-eb29-4279-8c3c-0348f883d1c4-log-httpd\") pod \"237fc16a-eb29-4279-8c3c-0348f883d1c4\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") "
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.241553 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237fc16a-eb29-4279-8c3c-0348f883d1c4-run-httpd\") pod \"237fc16a-eb29-4279-8c3c-0348f883d1c4\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") "
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.241582 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-config-data\") pod \"237fc16a-eb29-4279-8c3c-0348f883d1c4\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") "
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.241639 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-scripts\") pod \"237fc16a-eb29-4279-8c3c-0348f883d1c4\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") "
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.241728 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-combined-ca-bundle\") pod \"237fc16a-eb29-4279-8c3c-0348f883d1c4\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") "
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.241741 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237fc16a-eb29-4279-8c3c-0348f883d1c4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "237fc16a-eb29-4279-8c3c-0348f883d1c4" (UID: "237fc16a-eb29-4279-8c3c-0348f883d1c4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.241786 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gqjm\" (UniqueName: \"kubernetes.io/projected/237fc16a-eb29-4279-8c3c-0348f883d1c4-kube-api-access-5gqjm\") pod \"237fc16a-eb29-4279-8c3c-0348f883d1c4\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") "
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.241865 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-sg-core-conf-yaml\") pod \"237fc16a-eb29-4279-8c3c-0348f883d1c4\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") "
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.241985 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-ceilometer-tls-certs\") pod \"237fc16a-eb29-4279-8c3c-0348f883d1c4\" (UID: \"237fc16a-eb29-4279-8c3c-0348f883d1c4\") "
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.242613 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237fc16a-eb29-4279-8c3c-0348f883d1c4-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.247651 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237fc16a-eb29-4279-8c3c-0348f883d1c4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "237fc16a-eb29-4279-8c3c-0348f883d1c4" (UID: "237fc16a-eb29-4279-8c3c-0348f883d1c4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.247700 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-scripts" (OuterVolumeSpecName: "scripts") pod "237fc16a-eb29-4279-8c3c-0348f883d1c4" (UID: "237fc16a-eb29-4279-8c3c-0348f883d1c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.257285 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237fc16a-eb29-4279-8c3c-0348f883d1c4-kube-api-access-5gqjm" (OuterVolumeSpecName: "kube-api-access-5gqjm") pod "237fc16a-eb29-4279-8c3c-0348f883d1c4" (UID: "237fc16a-eb29-4279-8c3c-0348f883d1c4"). InnerVolumeSpecName "kube-api-access-5gqjm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.279790 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "237fc16a-eb29-4279-8c3c-0348f883d1c4" (UID: "237fc16a-eb29-4279-8c3c-0348f883d1c4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.308701 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "237fc16a-eb29-4279-8c3c-0348f883d1c4" (UID: "237fc16a-eb29-4279-8c3c-0348f883d1c4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.334215 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "237fc16a-eb29-4279-8c3c-0348f883d1c4" (UID: "237fc16a-eb29-4279-8c3c-0348f883d1c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.345350 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.345385 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237fc16a-eb29-4279-8c3c-0348f883d1c4-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.345394 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-scripts\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.345402 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.345411 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gqjm\" (UniqueName: \"kubernetes.io/projected/237fc16a-eb29-4279-8c3c-0348f883d1c4-kube-api-access-5gqjm\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.345423 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.372062 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-config-data" (OuterVolumeSpecName: "config-data") pod "237fc16a-eb29-4279-8c3c-0348f883d1c4" (UID: "237fc16a-eb29-4279-8c3c-0348f883d1c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.448456 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237fc16a-eb29-4279-8c3c-0348f883d1c4-config-data\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.664200 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237fc16a-eb29-4279-8c3c-0348f883d1c4","Type":"ContainerDied","Data":"a3aceaca98bc910e8b928a9da7eb5b6bf2dccb445ad55afc2ac332e14b81c0b1"}
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.664246 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.664269 4727 scope.go:117] "RemoveContainer" containerID="8be80e9fb1e4821a57c5b28b1ef61861b4e4f92514cdc7bc5ba6d57e36c18a1a"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.696700 4727 scope.go:117] "RemoveContainer" containerID="6eac0df4ef5b05d6ffac026929e55b8e44bdad184f2d2d2f0e84ba226e4d38aa"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.706011 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.715667 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.728894 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 10 15:02:22 crc kubenswrapper[4727]: E1210 15:02:22.729502 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="ceilometer-central-agent"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.729524 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="ceilometer-central-agent"
Dec 10 15:02:22 crc kubenswrapper[4727]: E1210 15:02:22.729540 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dda6a1-0c7e-4ea5-9ec5-aab327344ace" containerName="dnsmasq-dns"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.729548 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dda6a1-0c7e-4ea5-9ec5-aab327344ace" containerName="dnsmasq-dns"
Dec 10 15:02:22 crc kubenswrapper[4727]: E1210 15:02:22.729565 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="ceilometer-notification-agent"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.729571 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="ceilometer-notification-agent"
Dec 10 15:02:22 crc kubenswrapper[4727]: E1210 15:02:22.729589 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dda6a1-0c7e-4ea5-9ec5-aab327344ace" containerName="init"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.729595 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dda6a1-0c7e-4ea5-9ec5-aab327344ace" containerName="init"
Dec 10 15:02:22 crc kubenswrapper[4727]: E1210 15:02:22.729611 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="proxy-httpd"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.729617 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="proxy-httpd"
Dec 10 15:02:22 crc kubenswrapper[4727]: E1210 15:02:22.729630 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="sg-core"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.729636 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="sg-core"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.729868 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="proxy-httpd"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.729893 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="ceilometer-central-agent"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.729929 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="46dda6a1-0c7e-4ea5-9ec5-aab327344ace" containerName="dnsmasq-dns"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.729943 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="sg-core"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.729968 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" containerName="ceilometer-notification-agent"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.732325 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.741032 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.855022 4727 scope.go:117] "RemoveContainer" containerID="e98f516ef35d35ebcd9a7c7f55042856e7a962016cfb0fbeedffcfb5696ab522"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.856290 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.856346 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.857433 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.882119 4727 scope.go:117] "RemoveContainer" containerID="5ee6881012eba85e05ee61b284345499bf243279933486278dba4f4cca2e0aa5"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.960350 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/727601cd-934c-4d0d-b32e-c66a80adbb9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.961017 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/727601cd-934c-4d0d-b32e-c66a80adbb9f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.961094 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/727601cd-934c-4d0d-b32e-c66a80adbb9f-scripts\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.961473 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ct5k\" (UniqueName: \"kubernetes.io/projected/727601cd-934c-4d0d-b32e-c66a80adbb9f-kube-api-access-6ct5k\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.961536 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727601cd-934c-4d0d-b32e-c66a80adbb9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.961612 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/727601cd-934c-4d0d-b32e-c66a80adbb9f-run-httpd\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.961706 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/727601cd-934c-4d0d-b32e-c66a80adbb9f-log-httpd\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:22 crc kubenswrapper[4727]: I1210 15:02:22.961957 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/727601cd-934c-4d0d-b32e-c66a80adbb9f-config-data\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.063855 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/727601cd-934c-4d0d-b32e-c66a80adbb9f-config-data\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.063939 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/727601cd-934c-4d0d-b32e-c66a80adbb9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.064014 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/727601cd-934c-4d0d-b32e-c66a80adbb9f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.064037 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/727601cd-934c-4d0d-b32e-c66a80adbb9f-scripts\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.064059 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ct5k\" (UniqueName: \"kubernetes.io/projected/727601cd-934c-4d0d-b32e-c66a80adbb9f-kube-api-access-6ct5k\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.064081 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727601cd-934c-4d0d-b32e-c66a80adbb9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.064104 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/727601cd-934c-4d0d-b32e-c66a80adbb9f-run-httpd\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.064132 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/727601cd-934c-4d0d-b32e-c66a80adbb9f-log-httpd\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.064625 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/727601cd-934c-4d0d-b32e-c66a80adbb9f-log-httpd\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.065564 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/727601cd-934c-4d0d-b32e-c66a80adbb9f-run-httpd\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.069418 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/727601cd-934c-4d0d-b32e-c66a80adbb9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.070042 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/727601cd-934c-4d0d-b32e-c66a80adbb9f-config-data\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.070466 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727601cd-934c-4d0d-b32e-c66a80adbb9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.070480 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/727601cd-934c-4d0d-b32e-c66a80adbb9f-scripts\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.080491 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/727601cd-934c-4d0d-b32e-c66a80adbb9f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.082893 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ct5k\" (UniqueName: \"kubernetes.io/projected/727601cd-934c-4d0d-b32e-c66a80adbb9f-kube-api-access-6ct5k\") pod \"ceilometer-0\" (UID: \"727601cd-934c-4d0d-b32e-c66a80adbb9f\") " pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.190479 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.653997 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 10 15:02:23 crc kubenswrapper[4727]: E1210 15:02:23.681192 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 10 15:02:23 crc kubenswrapper[4727]: E1210 15:02:23.681251 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 10 15:02:23 crc kubenswrapper[4727]: E1210 15:02:23.681422 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 10 15:02:23 crc kubenswrapper[4727]: E1210 15:02:23.682977 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:02:23 crc kubenswrapper[4727]: I1210 15:02:23.685368 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"727601cd-934c-4d0d-b32e-c66a80adbb9f","Type":"ContainerStarted","Data":"7932337668bd5b86dc52ab20fc576b0fca05f0e99e91904f56626c5283612f5d"}
Dec 10 15:02:23 crc kubenswrapper[4727]: E1210 15:02:23.756276 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 10 15:02:23 crc kubenswrapper[4727]: E1210 15:02:23.756338 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 10 15:02:23 crc kubenswrapper[4727]: E1210 15:02:23.756744 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 10 15:02:24 crc kubenswrapper[4727]: I1210 15:02:24.098789 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5fd9b586ff-mmbg5" podUID="46dda6a1-0c7e-4ea5-9ec5-aab327344ace" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.218:5353: i/o timeout"
Dec 10 15:02:24 crc kubenswrapper[4727]: I1210 15:02:24.599551 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="237fc16a-eb29-4279-8c3c-0348f883d1c4" path="/var/lib/kubelet/pods/237fc16a-eb29-4279-8c3c-0348f883d1c4/volumes"
Dec 10 15:02:25 crc kubenswrapper[4727]: I1210 15:02:25.726607 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"727601cd-934c-4d0d-b32e-c66a80adbb9f","Type":"ContainerStarted","Data":"123a79a74c8c0194cb05e2f17854035c03f79271a2782217e1ad374e82689d0a"}
Dec 10 15:02:26 crc kubenswrapper[4727]: I1210 15:02:26.740039 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"727601cd-934c-4d0d-b32e-c66a80adbb9f","Type":"ContainerStarted","Data":"eb051fbefa9b1c67a8e92ebaba27c600e8a584ebf0bf13c7e97e4cbae8a0bba4"}
Dec 10 15:02:27 crc kubenswrapper[4727]: E1210 15:02:27.555937 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:02:27 crc kubenswrapper[4727]: I1210 15:02:27.753836 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"727601cd-934c-4d0d-b32e-c66a80adbb9f","Type":"ContainerStarted","Data":"e50df3c76a776c6dd58227a3d6c0a42bddc9ceac4d000166491782bb68f96650"}
Dec 10 15:02:27 crc kubenswrapper[4727]: I1210 15:02:27.754125 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 10 15:02:27 crc kubenswrapper[4727]: E1210 15:02:27.755709 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:02:28 crc kubenswrapper[4727]: E1210 15:02:28.765689 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:02:29 crc kubenswrapper[4727]: I1210 15:02:29.244232 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85f64749dc-q82js"
Dec 10 15:02:29 crc kubenswrapper[4727]: I1210 15:02:29.373936 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-kbp24"]
Dec 10 15:02:29 crc kubenswrapper[4727]: I1210 15:02:29.374310 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" podUID="6351abca-bdee-4f6a-aff3-8673ff834122" containerName="dnsmasq-dns" containerID="cri-o://bdec14fb6d043b1d1a978ea07e1629720c2b17fab043efd2fef81b548e75dcf3" gracePeriod=10
Dec 10 15:02:29 crc kubenswrapper[4727]: I1210 15:02:29.784048 4727 generic.go:334] "Generic (PLEG): container finished" podID="6351abca-bdee-4f6a-aff3-8673ff834122" containerID="bdec14fb6d043b1d1a978ea07e1629720c2b17fab043efd2fef81b548e75dcf3" exitCode=0
Dec 10 15:02:29 crc kubenswrapper[4727]: I1210 15:02:29.784110 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" event={"ID":"6351abca-bdee-4f6a-aff3-8673ff834122","Type":"ContainerDied","Data":"bdec14fb6d043b1d1a978ea07e1629720c2b17fab043efd2fef81b548e75dcf3"}
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.030513 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24"
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.052692 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-dns-swift-storage-0\") pod \"6351abca-bdee-4f6a-aff3-8673ff834122\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") "
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.052745 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-config\") pod \"6351abca-bdee-4f6a-aff3-8673ff834122\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") "
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.154564 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-ovsdbserver-nb\") pod \"6351abca-bdee-4f6a-aff3-8673ff834122\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") "
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.154891 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-ovsdbserver-sb\") pod \"6351abca-bdee-4f6a-aff3-8673ff834122\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") "
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.155014 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-openstack-edpm-ipam\") pod \"6351abca-bdee-4f6a-aff3-8673ff834122\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") "
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.155070 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2rsd\" (UniqueName: \"kubernetes.io/projected/6351abca-bdee-4f6a-aff3-8673ff834122-kube-api-access-g2rsd\") pod \"6351abca-bdee-4f6a-aff3-8673ff834122\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") "
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.155096 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-dns-svc\") pod \"6351abca-bdee-4f6a-aff3-8673ff834122\" (UID: \"6351abca-bdee-4f6a-aff3-8673ff834122\") "
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.161126 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-config" (OuterVolumeSpecName: "config") pod "6351abca-bdee-4f6a-aff3-8673ff834122" (UID: "6351abca-bdee-4f6a-aff3-8673ff834122"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.168132 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6351abca-bdee-4f6a-aff3-8673ff834122-kube-api-access-g2rsd" (OuterVolumeSpecName: "kube-api-access-g2rsd") pod "6351abca-bdee-4f6a-aff3-8673ff834122" (UID: "6351abca-bdee-4f6a-aff3-8673ff834122"). InnerVolumeSpecName "kube-api-access-g2rsd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.176390 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6351abca-bdee-4f6a-aff3-8673ff834122" (UID: "6351abca-bdee-4f6a-aff3-8673ff834122"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.239191 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6351abca-bdee-4f6a-aff3-8673ff834122" (UID: "6351abca-bdee-4f6a-aff3-8673ff834122"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.251980 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6351abca-bdee-4f6a-aff3-8673ff834122" (UID: "6351abca-bdee-4f6a-aff3-8673ff834122"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.255549 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "6351abca-bdee-4f6a-aff3-8673ff834122" (UID: "6351abca-bdee-4f6a-aff3-8673ff834122"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.259599 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6351abca-bdee-4f6a-aff3-8673ff834122" (UID: "6351abca-bdee-4f6a-aff3-8673ff834122"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.259744 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.259773 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.259792 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2rsd\" (UniqueName: \"kubernetes.io/projected/6351abca-bdee-4f6a-aff3-8673ff834122-kube-api-access-g2rsd\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.259815 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.259832 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.259847 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-config\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.362732 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6351abca-bdee-4f6a-aff3-8673ff834122-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.799462 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24" event={"ID":"6351abca-bdee-4f6a-aff3-8673ff834122","Type":"ContainerDied","Data":"21efce07c69ec6fd9bede8aec4ce6ed31aa5100e3dea0c56d8d936c071f0ccb5"}
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.799523 4727 scope.go:117] "RemoveContainer" containerID="bdec14fb6d043b1d1a978ea07e1629720c2b17fab043efd2fef81b548e75dcf3"
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.799703 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-kbp24"
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.831043 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-kbp24"]
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.836702 4727 scope.go:117] "RemoveContainer" containerID="794d04e11ac7de5ac8aaed9fd462c1f883eb2fb995e4a2857bd6190e87f69503"
Dec 10 15:02:30 crc kubenswrapper[4727]: I1210 15:02:30.845242 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-kbp24"]
Dec 10 15:02:32 crc kubenswrapper[4727]: I1210 15:02:32.595459 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6351abca-bdee-4f6a-aff3-8673ff834122" path="/var/lib/kubelet/pods/6351abca-bdee-4f6a-aff3-8673ff834122/volumes"
Dec 10 15:02:33 crc kubenswrapper[4727]: I1210 15:02:33.563378 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212"
Dec 10 15:02:33 crc kubenswrapper[4727]: E1210 15:02:33.563986 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 15:02:36 crc kubenswrapper[4727]: E1210 15:02:36.578293 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:02:39 crc kubenswrapper[4727]: I1210 15:02:39.910178 4727 generic.go:334] "Generic (PLEG): container finished" podID="6bb6d576-35c9-4ce9-9fa0-6cef4f513739" containerID="85ebad7cb0172c7485b4ce0439aa844e53fe0cb0c013f1bcb41e3efe7c5af244" exitCode=0
Dec 10 15:02:39 crc kubenswrapper[4727]: I1210 15:02:39.910353 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6bb6d576-35c9-4ce9-9fa0-6cef4f513739","Type":"ContainerDied","Data":"85ebad7cb0172c7485b4ce0439aa844e53fe0cb0c013f1bcb41e3efe7c5af244"}
Dec 10 15:02:39 crc kubenswrapper[4727]: I1210 15:02:39.914743 4727 generic.go:334] "Generic (PLEG): container finished" podID="8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c" containerID="cacf76e1f5c8a5dd92da0a7d51a4990823582691d30eaf732b2964688835233c" exitCode=0
Dec 10 15:02:39 crc kubenswrapper[4727]: I1210 15:02:39.914834 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c","Type":"ContainerDied","Data":"cacf76e1f5c8a5dd92da0a7d51a4990823582691d30eaf732b2964688835233c"}
Dec 10 15:02:40 crc kubenswrapper[4727]: I1210 15:02:40.929743 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6bb6d576-35c9-4ce9-9fa0-6cef4f513739","Type":"ContainerStarted","Data":"80fee2019e307780c5e6953a750190c4caa3404fc5d7bb5e7e5aa929c3e890d9"}
Dec 10 15:02:40 crc kubenswrapper[4727]: I1210 15:02:40.930265 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Dec 10 15:02:40 crc kubenswrapper[4727]: I1210 15:02:40.933026 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c","Type":"ContainerStarted","Data":"415cc4e83206471e8582f2b6c033879691e0a8bb645a945c5f613ec1e1630460"}
Dec 10 15:02:40 crc kubenswrapper[4727]: I1210 15:02:40.933317 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Dec 10 15:02:41 crc kubenswrapper[4727]: I1210 15:02:41.008409 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.008386973 podStartE2EDuration="38.008386973s" podCreationTimestamp="2025-12-10 15:02:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:02:41.001744253 +0000 UTC m=+1865.196518795" watchObservedRunningTime="2025-12-10 15:02:41.008386973 +0000 UTC m=+1865.203161525"
Dec 10 15:02:41 crc kubenswrapper[4727]: I1210 15:02:41.047222 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.047197621 podStartE2EDuration="38.047197621s" podCreationTimestamp="2025-12-10 15:02:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:02:41.043142553 +0000 UTC m=+1865.237917095" watchObservedRunningTime="2025-12-10 15:02:41.047197621 +0000 UTC m=+1865.241972163"
Dec 10 15:02:44 crc kubenswrapper[4727]: I1210 15:02:44.563166 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212"
Dec 10 15:02:44 crc kubenswrapper[4727]: E1210 15:02:44.564208 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 15:02:44 crc kubenswrapper[4727]: I1210 15:02:44.580153 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Dec 10 15:02:44 crc kubenswrapper[4727]: E1210 15:02:44.689059 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 10 15:02:44 crc kubenswrapper[4727]: E1210 15:02:44.689142 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired.
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:02:44 crc kubenswrapper[4727]: E1210 15:02:44.689316 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:02:44 crc kubenswrapper[4727]: E1210 15:02:44.690515 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:02:44 crc kubenswrapper[4727]: E1210 15:02:44.976736 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.306169 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k"] Dec 10 15:02:46 crc kubenswrapper[4727]: E1210 15:02:46.307410 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6351abca-bdee-4f6a-aff3-8673ff834122" containerName="dnsmasq-dns" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.307431 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6351abca-bdee-4f6a-aff3-8673ff834122" containerName="dnsmasq-dns" Dec 10 15:02:46 crc kubenswrapper[4727]: E1210 15:02:46.307453 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6351abca-bdee-4f6a-aff3-8673ff834122" containerName="init" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.307462 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6351abca-bdee-4f6a-aff3-8673ff834122" containerName="init" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.308879 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6351abca-bdee-4f6a-aff3-8673ff834122" containerName="dnsmasq-dns" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.310403 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.314451 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j82js" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.317686 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.318256 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.318879 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.367411 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.367589 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.367767 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs7zw\" (UniqueName: \"kubernetes.io/projected/49639fa2-d7d8-427b-ac47-9221c7fd68c3-kube-api-access-qs7zw\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.367948 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.370928 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k"] Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.470469 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.470600 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-ssh-key\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.470685 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs7zw\" (UniqueName: \"kubernetes.io/projected/49639fa2-d7d8-427b-ac47-9221c7fd68c3-kube-api-access-qs7zw\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.470764 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.476855 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.477134 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.477278 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.491515 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs7zw\" (UniqueName: \"kubernetes.io/projected/49639fa2-d7d8-427b-ac47-9221c7fd68c3-kube-api-access-qs7zw\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:02:46 crc kubenswrapper[4727]: I1210 15:02:46.639050 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:02:47 crc kubenswrapper[4727]: I1210 15:02:47.317308 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k"] Dec 10 15:02:47 crc kubenswrapper[4727]: W1210 15:02:47.319807 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49639fa2_d7d8_427b_ac47_9221c7fd68c3.slice/crio-cff791b74ce3e068f982ade5307cd3b0323f7a153e938e2eb29ab71e83d3166f WatchSource:0}: Error finding container cff791b74ce3e068f982ade5307cd3b0323f7a153e938e2eb29ab71e83d3166f: Status 404 returned error can't find the container with id cff791b74ce3e068f982ade5307cd3b0323f7a153e938e2eb29ab71e83d3166f Dec 10 15:02:48 crc kubenswrapper[4727]: I1210 15:02:48.015060 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" event={"ID":"49639fa2-d7d8-427b-ac47-9221c7fd68c3","Type":"ContainerStarted","Data":"cff791b74ce3e068f982ade5307cd3b0323f7a153e938e2eb29ab71e83d3166f"} Dec 10 15:02:49 crc kubenswrapper[4727]: E1210 15:02:49.566213 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:02:54 crc kubenswrapper[4727]: I1210 15:02:54.104181 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.231:5671: connect: connection refused" Dec 10 15:02:54 crc kubenswrapper[4727]: I1210 15:02:54.117667 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="6bb6d576-35c9-4ce9-9fa0-6cef4f513739" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.232:5671: connect: connection refused" Dec 10 15:02:55 crc kubenswrapper[4727]: I1210 15:02:55.563940 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:02:55 crc kubenswrapper[4727]: E1210 15:02:55.711492 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:02:59 crc kubenswrapper[4727]: E1210 15:02:59.565756 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:03:00 crc kubenswrapper[4727]: I1210 15:03:00.170355 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" 
event={"ID":"49639fa2-d7d8-427b-ac47-9221c7fd68c3","Type":"ContainerStarted","Data":"8c2c792712f50e766c196373d410559f87b22656a0a562e0f7935302df9dc1e4"} Dec 10 15:03:00 crc kubenswrapper[4727]: I1210 15:03:00.194347 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" podStartSLOduration=2.657594078 podStartE2EDuration="14.194325358s" podCreationTimestamp="2025-12-10 15:02:46 +0000 UTC" firstStartedPulling="2025-12-10 15:02:47.323089143 +0000 UTC m=+1871.517863685" lastFinishedPulling="2025-12-10 15:02:58.859820423 +0000 UTC m=+1883.054594965" observedRunningTime="2025-12-10 15:03:00.187702218 +0000 UTC m=+1884.382476780" watchObservedRunningTime="2025-12-10 15:03:00.194325358 +0000 UTC m=+1884.389099890" Dec 10 15:03:02 crc kubenswrapper[4727]: E1210 15:03:02.565324 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:03:04 crc kubenswrapper[4727]: I1210 15:03:04.103187 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 10 15:03:04 crc kubenswrapper[4727]: I1210 15:03:04.117093 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:07 crc kubenswrapper[4727]: I1210 15:03:07.563405 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:03:07 crc kubenswrapper[4727]: E1210 15:03:07.564263 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:03:12 crc kubenswrapper[4727]: I1210 15:03:12.421046 4727 generic.go:334] "Generic (PLEG): container finished" podID="49639fa2-d7d8-427b-ac47-9221c7fd68c3" containerID="8c2c792712f50e766c196373d410559f87b22656a0a562e0f7935302df9dc1e4" exitCode=0 Dec 10 15:03:12 crc kubenswrapper[4727]: I1210 15:03:12.421124 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" event={"ID":"49639fa2-d7d8-427b-ac47-9221c7fd68c3","Type":"ContainerDied","Data":"8c2c792712f50e766c196373d410559f87b22656a0a562e0f7935302df9dc1e4"} Dec 10 15:03:13 crc kubenswrapper[4727]: E1210 15:03:13.727898 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:03:13 crc kubenswrapper[4727]: E1210 15:03:13.728081 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:03:13 crc kubenswrapper[4727]: E1210 15:03:13.728251 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 10 15:03:13 crc kubenswrapper[4727]: E1210 15:03:13.729487 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.013401 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.058120 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-repo-setup-combined-ca-bundle\") pod \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.058247 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs7zw\" (UniqueName: \"kubernetes.io/projected/49639fa2-d7d8-427b-ac47-9221c7fd68c3-kube-api-access-qs7zw\") pod \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.058356 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-inventory\") pod \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.058462 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-ssh-key\") pod \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\" (UID: \"49639fa2-d7d8-427b-ac47-9221c7fd68c3\") " Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.066626 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "49639fa2-d7d8-427b-ac47-9221c7fd68c3" (UID: "49639fa2-d7d8-427b-ac47-9221c7fd68c3"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.067068 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49639fa2-d7d8-427b-ac47-9221c7fd68c3-kube-api-access-qs7zw" (OuterVolumeSpecName: "kube-api-access-qs7zw") pod "49639fa2-d7d8-427b-ac47-9221c7fd68c3" (UID: "49639fa2-d7d8-427b-ac47-9221c7fd68c3"). InnerVolumeSpecName "kube-api-access-qs7zw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.094007 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "49639fa2-d7d8-427b-ac47-9221c7fd68c3" (UID: "49639fa2-d7d8-427b-ac47-9221c7fd68c3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.101293 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-inventory" (OuterVolumeSpecName: "inventory") pod "49639fa2-d7d8-427b-ac47-9221c7fd68c3" (UID: "49639fa2-d7d8-427b-ac47-9221c7fd68c3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.160338 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.160380 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.160390 4727 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49639fa2-d7d8-427b-ac47-9221c7fd68c3-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.160404 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs7zw\" (UniqueName: \"kubernetes.io/projected/49639fa2-d7d8-427b-ac47-9221c7fd68c3-kube-api-access-qs7zw\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.447128 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" event={"ID":"49639fa2-d7d8-427b-ac47-9221c7fd68c3","Type":"ContainerDied","Data":"cff791b74ce3e068f982ade5307cd3b0323f7a153e938e2eb29ab71e83d3166f"} Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.447181 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cff791b74ce3e068f982ade5307cd3b0323f7a153e938e2eb29ab71e83d3166f" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.447238 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.552365 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7"] Dec 10 15:03:14 crc kubenswrapper[4727]: E1210 15:03:14.552872 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49639fa2-d7d8-427b-ac47-9221c7fd68c3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.552889 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="49639fa2-d7d8-427b-ac47-9221c7fd68c3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.553159 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="49639fa2-d7d8-427b-ac47-9221c7fd68c3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.554434 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.557360 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.557447 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.558650 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.564577 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j82js" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.587628 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7"] Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.669587 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4xpj\" (UniqueName: \"kubernetes.io/projected/b2b9c360-f933-4326-8b6f-5c1d869577c9-kube-api-access-d4xpj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x49j7\" (UID: \"b2b9c360-f933-4326-8b6f-5c1d869577c9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.669652 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2b9c360-f933-4326-8b6f-5c1d869577c9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x49j7\" (UID: \"b2b9c360-f933-4326-8b6f-5c1d869577c9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.669958 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b2b9c360-f933-4326-8b6f-5c1d869577c9-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x49j7\" (UID: \"b2b9c360-f933-4326-8b6f-5c1d869577c9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.771894 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/b2b9c360-f933-4326-8b6f-5c1d869577c9-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x49j7\" (UID: \"b2b9c360-f933-4326-8b6f-5c1d869577c9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.772078 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4xpj\" (UniqueName: \"kubernetes.io/projected/b2b9c360-f933-4326-8b6f-5c1d869577c9-kube-api-access-d4xpj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x49j7\" (UID: \"b2b9c360-f933-4326-8b6f-5c1d869577c9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.772126 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2b9c360-f933-4326-8b6f-5c1d869577c9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x49j7\" (UID: \"b2b9c360-f933-4326-8b6f-5c1d869577c9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.777646 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b2b9c360-f933-4326-8b6f-5c1d869577c9-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x49j7\" (UID: \"b2b9c360-f933-4326-8b6f-5c1d869577c9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.780208 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2b9c360-f933-4326-8b6f-5c1d869577c9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x49j7\" (UID: \"b2b9c360-f933-4326-8b6f-5c1d869577c9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.788649 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4xpj\" (UniqueName: \"kubernetes.io/projected/b2b9c360-f933-4326-8b6f-5c1d869577c9-kube-api-access-d4xpj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x49j7\" (UID: \"b2b9c360-f933-4326-8b6f-5c1d869577c9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" Dec 10 15:03:14 crc kubenswrapper[4727]: I1210 15:03:14.876937 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" Dec 10 15:03:15 crc kubenswrapper[4727]: W1210 15:03:15.466244 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2b9c360_f933_4326_8b6f_5c1d869577c9.slice/crio-ae5a7be7ac8f3a8d166ce6e360d34177f36806c0495a829498e8be2233703488 WatchSource:0}: Error finding container ae5a7be7ac8f3a8d166ce6e360d34177f36806c0495a829498e8be2233703488: Status 404 returned error can't find the container with id ae5a7be7ac8f3a8d166ce6e360d34177f36806c0495a829498e8be2233703488 Dec 10 15:03:15 crc kubenswrapper[4727]: I1210 15:03:15.468763 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7"] Dec 10 15:03:16 crc kubenswrapper[4727]: I1210 15:03:16.480090 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" event={"ID":"b2b9c360-f933-4326-8b6f-5c1d869577c9","Type":"ContainerStarted","Data":"ae5a7be7ac8f3a8d166ce6e360d34177f36806c0495a829498e8be2233703488"} Dec 10 15:03:17 crc kubenswrapper[4727]: I1210 15:03:17.493203 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" event={"ID":"b2b9c360-f933-4326-8b6f-5c1d869577c9","Type":"ContainerStarted","Data":"967ef2a2d9abd8beb9dd4323d721562cb97586e5a1c12e675c2cc513fd90a769"} Dec 10 15:03:17 crc kubenswrapper[4727]: I1210 15:03:17.524709 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" podStartSLOduration=1.937482361 podStartE2EDuration="3.524679713s" podCreationTimestamp="2025-12-10 15:03:14 +0000 UTC" firstStartedPulling="2025-12-10 15:03:15.470870636 +0000 UTC m=+1899.665645178" lastFinishedPulling="2025-12-10 15:03:17.058067988 +0000 UTC m=+1901.252842530" observedRunningTime="2025-12-10 15:03:17.513168925 +0000 UTC m=+1901.707943487" watchObservedRunningTime="2025-12-10 15:03:17.524679713 +0000 UTC m=+1901.719454255" Dec 10 15:03:17 crc kubenswrapper[4727]: E1210 15:03:17.689654 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:03:17 crc kubenswrapper[4727]: E1210 15:03:17.689741 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:03:17 crc kubenswrapper[4727]: E1210 15:03:17.689969 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:03:17 crc kubenswrapper[4727]: E1210 15:03:17.691247 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:03:18 crc kubenswrapper[4727]: I1210 15:03:18.687019 4727 scope.go:117] "RemoveContainer" containerID="9a0cfea999642f666c7f3840c0d1ad1186ef70157a8110ed8ee9d0a6a56acdd7" Dec 10 15:03:20 crc kubenswrapper[4727]: I1210 15:03:20.547286 4727 generic.go:334] "Generic (PLEG): container finished" podID="b2b9c360-f933-4326-8b6f-5c1d869577c9" containerID="967ef2a2d9abd8beb9dd4323d721562cb97586e5a1c12e675c2cc513fd90a769" exitCode=0 Dec 10 15:03:20 crc kubenswrapper[4727]: I1210 15:03:20.547383 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" event={"ID":"b2b9c360-f933-4326-8b6f-5c1d869577c9","Type":"ContainerDied","Data":"967ef2a2d9abd8beb9dd4323d721562cb97586e5a1c12e675c2cc513fd90a769"} Dec 10 15:03:20 crc kubenswrapper[4727]: I1210 15:03:20.564309 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:03:20 crc kubenswrapper[4727]: E1210 15:03:20.564674 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.136384 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.289598 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2b9c360-f933-4326-8b6f-5c1d869577c9-inventory\") pod \"b2b9c360-f933-4326-8b6f-5c1d869577c9\" (UID: \"b2b9c360-f933-4326-8b6f-5c1d869577c9\") " Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.289835 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4xpj\" (UniqueName: \"kubernetes.io/projected/b2b9c360-f933-4326-8b6f-5c1d869577c9-kube-api-access-d4xpj\") pod \"b2b9c360-f933-4326-8b6f-5c1d869577c9\" (UID: \"b2b9c360-f933-4326-8b6f-5c1d869577c9\") " Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.289881 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b2b9c360-f933-4326-8b6f-5c1d869577c9-ssh-key\") pod \"b2b9c360-f933-4326-8b6f-5c1d869577c9\" (UID: \"b2b9c360-f933-4326-8b6f-5c1d869577c9\") " Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.297292 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2b9c360-f933-4326-8b6f-5c1d869577c9-kube-api-access-d4xpj" (OuterVolumeSpecName: "kube-api-access-d4xpj") pod "b2b9c360-f933-4326-8b6f-5c1d869577c9" (UID: "b2b9c360-f933-4326-8b6f-5c1d869577c9"). InnerVolumeSpecName "kube-api-access-d4xpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.328986 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2b9c360-f933-4326-8b6f-5c1d869577c9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b2b9c360-f933-4326-8b6f-5c1d869577c9" (UID: "b2b9c360-f933-4326-8b6f-5c1d869577c9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.338338 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2b9c360-f933-4326-8b6f-5c1d869577c9-inventory" (OuterVolumeSpecName: "inventory") pod "b2b9c360-f933-4326-8b6f-5c1d869577c9" (UID: "b2b9c360-f933-4326-8b6f-5c1d869577c9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.392767 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4xpj\" (UniqueName: \"kubernetes.io/projected/b2b9c360-f933-4326-8b6f-5c1d869577c9-kube-api-access-d4xpj\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.392807 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b2b9c360-f933-4326-8b6f-5c1d869577c9-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.392819 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2b9c360-f933-4326-8b6f-5c1d869577c9-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.576555 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.577583 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x49j7" event={"ID":"b2b9c360-f933-4326-8b6f-5c1d869577c9","Type":"ContainerDied","Data":"ae5a7be7ac8f3a8d166ce6e360d34177f36806c0495a829498e8be2233703488"} Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.577642 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae5a7be7ac8f3a8d166ce6e360d34177f36806c0495a829498e8be2233703488" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.664226 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll"] Dec 10 15:03:22 crc kubenswrapper[4727]: E1210 15:03:22.664891 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2b9c360-f933-4326-8b6f-5c1d869577c9" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.664938 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2b9c360-f933-4326-8b6f-5c1d869577c9" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.665311 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2b9c360-f933-4326-8b6f-5c1d869577c9" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.666502 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.670415 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.670454 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j82js" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.672248 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.673211 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.680496 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll"] Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.909380 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkc4j\" (UniqueName: \"kubernetes.io/projected/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-kube-api-access-kkc4j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.909493 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.909537 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:03:22 crc kubenswrapper[4727]: I1210 15:03:22.909594 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:03:23 crc kubenswrapper[4727]: I1210 15:03:23.011545 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkc4j\" (UniqueName: \"kubernetes.io/projected/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-kube-api-access-kkc4j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:03:23 crc kubenswrapper[4727]: I1210 15:03:23.011613 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-ssh-key\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:03:23 crc kubenswrapper[4727]: I1210 15:03:23.011657 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:03:23 crc kubenswrapper[4727]: I1210 15:03:23.011690 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:03:23 crc kubenswrapper[4727]: I1210 15:03:23.016771 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:03:23 crc kubenswrapper[4727]: I1210 15:03:23.016893 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:03:23 crc kubenswrapper[4727]: I1210 15:03:23.017578 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:03:23 crc kubenswrapper[4727]: I1210 15:03:23.029162 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkc4j\" (UniqueName: \"kubernetes.io/projected/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-kube-api-access-kkc4j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:03:23 crc kubenswrapper[4727]: I1210 15:03:23.288655 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:03:23 crc kubenswrapper[4727]: I1210 15:03:23.894323 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll"] Dec 10 15:03:24 crc kubenswrapper[4727]: I1210 15:03:24.608864 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" event={"ID":"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2","Type":"ContainerStarted","Data":"ecc7937698ab550205f208a2056e7faf423128f02210f260fc3d8d7e0ed332db"} Dec 10 15:03:25 crc kubenswrapper[4727]: E1210 15:03:25.565594 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:03:26 crc kubenswrapper[4727]: I1210 15:03:26.632220 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" event={"ID":"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2","Type":"ContainerStarted","Data":"4eb900fe47df334d871ea7fbbd5c18119e72d684515dbebcf3c3b1fd06c90bae"} Dec 10 15:03:26 crc kubenswrapper[4727]: I1210 15:03:26.658920 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" podStartSLOduration=2.771998717 podStartE2EDuration="4.658868149s" podCreationTimestamp="2025-12-10 15:03:22 +0000 UTC" firstStartedPulling="2025-12-10 15:03:23.896927293 +0000 UTC m=+1908.091701835" lastFinishedPulling="2025-12-10 15:03:25.783796725 +0000 UTC m=+1909.978571267" observedRunningTime="2025-12-10 15:03:26.649340629 +0000 UTC m=+1910.844115171" watchObservedRunningTime="2025-12-10 15:03:26.658868149 +0000 UTC m=+1910.853642691" Dec 10 15:03:28 crc kubenswrapper[4727]: E1210 15:03:28.565522 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:03:31 crc kubenswrapper[4727]: I1210 15:03:31.563523 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:03:31 crc kubenswrapper[4727]: E1210 15:03:31.564159 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:03:39 crc kubenswrapper[4727]: E1210 15:03:39.566275 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:03:43 crc kubenswrapper[4727]: I1210 
Dec 10 15:03:28 crc kubenswrapper[4727]: E1210 15:03:28.565522 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:03:31 crc kubenswrapper[4727]: I1210 15:03:31.563523 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212"
Dec 10 15:03:31 crc kubenswrapper[4727]: E1210 15:03:31.564159 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 15:03:39 crc kubenswrapper[4727]: E1210 15:03:39.566275 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:03:43 crc kubenswrapper[4727]: I1210 15:03:43.563955 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212"
Dec 10 15:03:43 crc kubenswrapper[4727]: E1210 15:03:43.564511 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 15:03:43 crc kubenswrapper[4727]: E1210 15:03:43.565015 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:03:53 crc kubenswrapper[4727]: E1210 15:03:53.566198 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:03:54 crc kubenswrapper[4727]: I1210 15:03:54.563662 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212"
Dec 10 15:03:54 crc kubenswrapper[4727]: E1210 15:03:54.564398 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 15:03:54 crc kubenswrapper[4727]: E1210 15:03:54.565614 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:04:06 crc kubenswrapper[4727]: I1210 15:04:06.571297 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212"
Dec 10 15:04:06 crc kubenswrapper[4727]: E1210 15:04:06.571967 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 15:04:08 crc kubenswrapper[4727]: E1210 15:04:08.565645 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
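From here the log settles into a steady drone: the sync loop reports the ceilometer-central-agent and cloudkitty-db-sync pulls and the machine-config-daemon restart as blocked roughly every ten seconds, but those entries are re-evaluations, not retries. The actual retry is gated by the kubelet's exponential backoff, which by default starts at 10s and doubles up to the 5m cap quoted verbatim in the CrashLoopBackOff message ("back-off 5m0s"). A toy schedule under those assumed defaults:

```python
def backoff_schedule(base: float = 10.0, factor: float = 2.0,
                     cap: float = 300.0):
    """Yield successive retry delays; assumed kubelet-style defaults."""
    delay = base
    while True:
        yield min(delay, cap)
        delay *= factor

gen = backoff_schedule()
print([next(gen) for _ in range(7)])
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
# The ~89s gap between the two real ceilometer pull attempts below
# (15:04:08 -> 15:05:37) sits at the 80s step plus sync-loop jitter,
# and "back-off 5m0s" for machine-config-daemon is the 300s cap.
```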
pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:04:08 crc kubenswrapper[4727]: E1210 15:04:08.681406 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:04:08 crc kubenswrapper[4727]: E1210 15:04:08.681656 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:04:08 crc kubenswrapper[4727]: E1210 15:04:08.681815 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:04:08 crc kubenswrapper[4727]: E1210 15:04:08.682969 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:04:18 crc kubenswrapper[4727]: I1210 15:04:18.564389 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:04:18 crc kubenswrapper[4727]: E1210 15:04:18.565508 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:04:18 crc kubenswrapper[4727]: I1210 15:04:18.853289 4727 scope.go:117] "RemoveContainer" containerID="b12e9660bf8585f219d40f7f23ec3334f7caba717c11c222a07e84a444bfeeff" Dec 10 15:04:18 crc kubenswrapper[4727]: I1210 15:04:18.886267 4727 scope.go:117] "RemoveContainer" containerID="1b22da9a800bedeed241c7d8a4f2abf2e40a1b9db07f6532df08d01a37489cef" Dec 10 15:04:18 crc kubenswrapper[4727]: I1210 15:04:18.910419 4727 scope.go:117] "RemoveContainer" containerID="abcba7042fd38da9aa60a498a6143a2b721ea6adfdc6cebc70454788e93f9427" Dec 10 15:04:18 crc kubenswrapper[4727]: I1210 15:04:18.932918 4727 scope.go:117] "RemoveContainer" containerID="c48bbe9306787ff6d1c66afdab3d41634499974ffd7ada9f8ba57b35bb5c2678" Dec 10 15:04:18 crc kubenswrapper[4727]: I1210 15:04:18.960533 4727 scope.go:117] "RemoveContainer" containerID="629cf608530b2484323c36369fbc50bcd816b4833942e8e4535ce7e7bdae36c1" Dec 10 15:04:19 crc kubenswrapper[4727]: I1210 15:04:19.043360 4727 scope.go:117] "RemoveContainer" 
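This is the root cause for ceilometer-0: the registry itself reports that the current-tested tag "was deleted or has expired", so no amount of backoff will recover until the tag is revived (Quay's time machine) or the pod spec is pointed at a live tag or digest. Before reviving anything it is worth confirming the tag really is gone; a sketch against the standard Registry HTTP API v2 (anonymous read access is assumed, and tags/list may be paginated on large repositories):

```python
import json
import urllib.request

# Check whether the tag still exists, via the standard Registry HTTP API
# v2 tag listing. A private repository would need a bearer token.
REPO = "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central"

def tag_exists(repo: str, tag: str) -> bool:
    host, _, name = repo.partition("/")
    url = f"https://{host}/v2/{name}/tags/list"
    with urllib.request.urlopen(url, timeout=10) as resp:
        tags = json.load(resp).get("tags") or []
    return tag in tags

print(tag_exists(REPO, "current-tested"))
# Expected False while the tag is expired; every kubelet retry will fail
# identically until the tag is restored or the spec moves to a live
# tag/digest, since backoff only spaces out the same doomed request.
```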
containerID="f2b84b2705e5bf76ab581f55589f15eccc5771d8078211ae5ff2838b0665a04e" Dec 10 15:04:19 crc kubenswrapper[4727]: I1210 15:04:19.077928 4727 scope.go:117] "RemoveContainer" containerID="9d334b6d5d8840f3dfbbda50d5a923ea73180232ef05d9b782dc164dc66aa80f" Dec 10 15:04:19 crc kubenswrapper[4727]: I1210 15:04:19.105270 4727 scope.go:117] "RemoveContainer" containerID="6b78cea1f98c9453122264b36a0b51ee3f9ec92d00425150a69df34092154c8c" Dec 10 15:04:19 crc kubenswrapper[4727]: I1210 15:04:19.129585 4727 scope.go:117] "RemoveContainer" containerID="441d56117116acd6aaff1cf86c763a2812fe8795fbe5b9ceebbe9962cc222af4" Dec 10 15:04:20 crc kubenswrapper[4727]: E1210 15:04:20.565389 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:04:23 crc kubenswrapper[4727]: E1210 15:04:23.565229 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:04:32 crc kubenswrapper[4727]: E1210 15:04:32.566596 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:04:33 crc kubenswrapper[4727]: I1210 15:04:33.563603 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:04:33 crc kubenswrapper[4727]: E1210 15:04:33.564109 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:04:38 crc kubenswrapper[4727]: E1210 15:04:38.717882 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:04:38 crc kubenswrapper[4727]: E1210 15:04:38.718545 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:04:38 crc kubenswrapper[4727]: E1210 15:04:38.718744 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:04:38 crc kubenswrapper[4727]: E1210 15:04:38.722959 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:04:44 crc kubenswrapper[4727]: I1210 15:04:44.569093 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:04:44 crc kubenswrapper[4727]: E1210 15:04:44.570086 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:04:45 crc kubenswrapper[4727]: E1210 15:04:45.567072 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:04:52 crc kubenswrapper[4727]: I1210 15:04:52.773952 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9kxrj"] Dec 10 15:04:52 crc kubenswrapper[4727]: I1210 15:04:52.777598 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:04:52 crc kubenswrapper[4727]: I1210 15:04:52.790671 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9kxrj"] Dec 10 15:04:52 crc kubenswrapper[4727]: I1210 15:04:52.859808 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e2d264c-1da8-4554-be4e-15514139877c-catalog-content\") pod \"community-operators-9kxrj\" (UID: \"0e2d264c-1da8-4554-be4e-15514139877c\") " pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:04:52 crc kubenswrapper[4727]: I1210 15:04:52.860131 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e2d264c-1da8-4554-be4e-15514139877c-utilities\") pod \"community-operators-9kxrj\" (UID: \"0e2d264c-1da8-4554-be4e-15514139877c\") " pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:04:52 crc kubenswrapper[4727]: I1210 15:04:52.860289 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtvk7\" (UniqueName: \"kubernetes.io/projected/0e2d264c-1da8-4554-be4e-15514139877c-kube-api-access-mtvk7\") pod \"community-operators-9kxrj\" (UID: \"0e2d264c-1da8-4554-be4e-15514139877c\") " pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:04:52 crc kubenswrapper[4727]: I1210 15:04:52.962502 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e2d264c-1da8-4554-be4e-15514139877c-catalog-content\") pod \"community-operators-9kxrj\" (UID: \"0e2d264c-1da8-4554-be4e-15514139877c\") " pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:04:52 crc kubenswrapper[4727]: I1210 15:04:52.962614 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/0e2d264c-1da8-4554-be4e-15514139877c-utilities\") pod \"community-operators-9kxrj\" (UID: \"0e2d264c-1da8-4554-be4e-15514139877c\") " pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:04:52 crc kubenswrapper[4727]: I1210 15:04:52.962668 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtvk7\" (UniqueName: \"kubernetes.io/projected/0e2d264c-1da8-4554-be4e-15514139877c-kube-api-access-mtvk7\") pod \"community-operators-9kxrj\" (UID: \"0e2d264c-1da8-4554-be4e-15514139877c\") " pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:04:52 crc kubenswrapper[4727]: I1210 15:04:52.963510 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e2d264c-1da8-4554-be4e-15514139877c-catalog-content\") pod \"community-operators-9kxrj\" (UID: \"0e2d264c-1da8-4554-be4e-15514139877c\") " pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:04:52 crc kubenswrapper[4727]: I1210 15:04:52.963752 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e2d264c-1da8-4554-be4e-15514139877c-utilities\") pod \"community-operators-9kxrj\" (UID: \"0e2d264c-1da8-4554-be4e-15514139877c\") " pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:04:52 crc kubenswrapper[4727]: I1210 15:04:52.982083 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtvk7\" (UniqueName: \"kubernetes.io/projected/0e2d264c-1da8-4554-be4e-15514139877c-kube-api-access-mtvk7\") pod \"community-operators-9kxrj\" (UID: \"0e2d264c-1da8-4554-be4e-15514139877c\") " pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:04:53 crc kubenswrapper[4727]: I1210 15:04:53.100688 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:04:53 crc kubenswrapper[4727]: E1210 15:04:53.565150 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:04:53 crc kubenswrapper[4727]: I1210 15:04:53.635423 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9kxrj"] Dec 10 15:04:53 crc kubenswrapper[4727]: I1210 15:04:53.798094 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kxrj" event={"ID":"0e2d264c-1da8-4554-be4e-15514139877c","Type":"ContainerStarted","Data":"fe8efb9c8f3ddaf79e3e6fb3d15eaa74ad0c0ab557308d489452172a772856f9"} Dec 10 15:04:54 crc kubenswrapper[4727]: I1210 15:04:54.810227 4727 generic.go:334] "Generic (PLEG): container finished" podID="0e2d264c-1da8-4554-be4e-15514139877c" containerID="b4901c031aad8da95b86cdd8d058f0972738c723388246b27720bd1b3980e33e" exitCode=0 Dec 10 15:04:54 crc kubenswrapper[4727]: I1210 15:04:54.810393 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kxrj" event={"ID":"0e2d264c-1da8-4554-be4e-15514139877c","Type":"ContainerDied","Data":"b4901c031aad8da95b86cdd8d058f0972738c723388246b27720bd1b3980e33e"} Dec 10 15:04:56 crc kubenswrapper[4727]: I1210 15:04:56.837024 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kxrj" event={"ID":"0e2d264c-1da8-4554-be4e-15514139877c","Type":"ContainerStarted","Data":"86139311fcd659e15e698338cd67ea3a7e30452317e12593d9439645dda96a4a"} Dec 10 15:04:57 crc kubenswrapper[4727]: E1210 15:04:57.585386 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:04:58 crc kubenswrapper[4727]: I1210 15:04:58.563659 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:04:58 crc kubenswrapper[4727]: E1210 15:04:58.564001 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:05:00 crc kubenswrapper[4727]: I1210 15:05:00.880128 4727 generic.go:334] "Generic (PLEG): container finished" podID="0e2d264c-1da8-4554-be4e-15514139877c" containerID="86139311fcd659e15e698338cd67ea3a7e30452317e12593d9439645dda96a4a" exitCode=0 Dec 10 15:05:00 crc kubenswrapper[4727]: I1210 15:05:00.880208 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kxrj" event={"ID":"0e2d264c-1da8-4554-be4e-15514139877c","Type":"ContainerDied","Data":"86139311fcd659e15e698338cd67ea3a7e30452317e12593d9439645dda96a4a"} Dec 10 
15:05:02 crc kubenswrapper[4727]: I1210 15:05:02.908346 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kxrj" event={"ID":"0e2d264c-1da8-4554-be4e-15514139877c","Type":"ContainerStarted","Data":"6b33510c4f8a1cc89f7a624314b63850374e5507999f7e47ea8bf3bbf1bec6a2"} Dec 10 15:05:02 crc kubenswrapper[4727]: I1210 15:05:02.939528 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9kxrj" podStartSLOduration=4.000383378 podStartE2EDuration="10.939509785s" podCreationTimestamp="2025-12-10 15:04:52 +0000 UTC" firstStartedPulling="2025-12-10 15:04:54.813168263 +0000 UTC m=+1999.007942805" lastFinishedPulling="2025-12-10 15:05:01.75229467 +0000 UTC m=+2005.947069212" observedRunningTime="2025-12-10 15:05:02.934895519 +0000 UTC m=+2007.129670071" watchObservedRunningTime="2025-12-10 15:05:02.939509785 +0000 UTC m=+2007.134284327" Dec 10 15:05:03 crc kubenswrapper[4727]: I1210 15:05:03.101267 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:05:03 crc kubenswrapper[4727]: I1210 15:05:03.101331 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:05:04 crc kubenswrapper[4727]: I1210 15:05:04.155646 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9kxrj" podUID="0e2d264c-1da8-4554-be4e-15514139877c" containerName="registry-server" probeResult="failure" output=< Dec 10 15:05:04 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Dec 10 15:05:04 crc kubenswrapper[4727]: > Dec 10 15:05:05 crc kubenswrapper[4727]: E1210 15:05:05.565838 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:05:09 crc kubenswrapper[4727]: E1210 15:05:09.565653 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:05:12 crc kubenswrapper[4727]: I1210 15:05:12.563795 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:05:12 crc kubenswrapper[4727]: E1210 15:05:12.564523 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:05:13 crc kubenswrapper[4727]: I1210 15:05:13.151879 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:05:13 crc kubenswrapper[4727]: I1210 15:05:13.207595 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
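The startup-probe failure above is benign churn rather than a fault: registry-server inside community-operators-9kxrj serves gRPC on :50051, and the probe's 1s connection budget expired before the freshly started catalog had begun listening (the probe itself is an exec probe in the catalog image, likely a grpc_health_probe-style check; that is an assumption, since the log only shows its ":50051 within 1s" output). Nine seconds later the startup probe flips to "started" and readiness to "ready". A minimal TCP stand-in for that check:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Rough TCP stand-in for the probe's 1s connection budget."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run inside the pod's network namespace this mirrors the logged check:
# False while the registry server is still loading its catalog content,
# True once it listens on :50051 (at which point the startup probe
# reports "started", as it does at 15:05:13 below).
print(port_open("127.0.0.1", 50051))
```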
status="ready" pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:05:13 crc kubenswrapper[4727]: I1210 15:05:13.394082 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9kxrj"] Dec 10 15:05:15 crc kubenswrapper[4727]: I1210 15:05:15.043256 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9kxrj" podUID="0e2d264c-1da8-4554-be4e-15514139877c" containerName="registry-server" containerID="cri-o://6b33510c4f8a1cc89f7a624314b63850374e5507999f7e47ea8bf3bbf1bec6a2" gracePeriod=2 Dec 10 15:05:15 crc kubenswrapper[4727]: I1210 15:05:15.632833 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:05:15 crc kubenswrapper[4727]: I1210 15:05:15.762921 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e2d264c-1da8-4554-be4e-15514139877c-utilities\") pod \"0e2d264c-1da8-4554-be4e-15514139877c\" (UID: \"0e2d264c-1da8-4554-be4e-15514139877c\") " Dec 10 15:05:15 crc kubenswrapper[4727]: I1210 15:05:15.763184 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e2d264c-1da8-4554-be4e-15514139877c-catalog-content\") pod \"0e2d264c-1da8-4554-be4e-15514139877c\" (UID: \"0e2d264c-1da8-4554-be4e-15514139877c\") " Dec 10 15:05:15 crc kubenswrapper[4727]: I1210 15:05:15.763235 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtvk7\" (UniqueName: \"kubernetes.io/projected/0e2d264c-1da8-4554-be4e-15514139877c-kube-api-access-mtvk7\") pod \"0e2d264c-1da8-4554-be4e-15514139877c\" (UID: \"0e2d264c-1da8-4554-be4e-15514139877c\") " Dec 10 15:05:15 crc kubenswrapper[4727]: I1210 15:05:15.764079 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e2d264c-1da8-4554-be4e-15514139877c-utilities" (OuterVolumeSpecName: "utilities") pod "0e2d264c-1da8-4554-be4e-15514139877c" (UID: "0e2d264c-1da8-4554-be4e-15514139877c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:05:15 crc kubenswrapper[4727]: I1210 15:05:15.764202 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e2d264c-1da8-4554-be4e-15514139877c-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:05:15 crc kubenswrapper[4727]: I1210 15:05:15.775150 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e2d264c-1da8-4554-be4e-15514139877c-kube-api-access-mtvk7" (OuterVolumeSpecName: "kube-api-access-mtvk7") pod "0e2d264c-1da8-4554-be4e-15514139877c" (UID: "0e2d264c-1da8-4554-be4e-15514139877c"). InnerVolumeSpecName "kube-api-access-mtvk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:05:15 crc kubenswrapper[4727]: I1210 15:05:15.865183 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e2d264c-1da8-4554-be4e-15514139877c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e2d264c-1da8-4554-be4e-15514139877c" (UID: "0e2d264c-1da8-4554-be4e-15514139877c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:05:15 crc kubenswrapper[4727]: I1210 15:05:15.866439 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e2d264c-1da8-4554-be4e-15514139877c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:05:15 crc kubenswrapper[4727]: I1210 15:05:15.866464 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtvk7\" (UniqueName: \"kubernetes.io/projected/0e2d264c-1da8-4554-be4e-15514139877c-kube-api-access-mtvk7\") on node \"crc\" DevicePath \"\"" Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.056153 4727 generic.go:334] "Generic (PLEG): container finished" podID="0e2d264c-1da8-4554-be4e-15514139877c" containerID="6b33510c4f8a1cc89f7a624314b63850374e5507999f7e47ea8bf3bbf1bec6a2" exitCode=0 Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.056198 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kxrj" event={"ID":"0e2d264c-1da8-4554-be4e-15514139877c","Type":"ContainerDied","Data":"6b33510c4f8a1cc89f7a624314b63850374e5507999f7e47ea8bf3bbf1bec6a2"} Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.056208 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9kxrj" Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.056245 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kxrj" event={"ID":"0e2d264c-1da8-4554-be4e-15514139877c","Type":"ContainerDied","Data":"fe8efb9c8f3ddaf79e3e6fb3d15eaa74ad0c0ab557308d489452172a772856f9"} Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.056265 4727 scope.go:117] "RemoveContainer" containerID="6b33510c4f8a1cc89f7a624314b63850374e5507999f7e47ea8bf3bbf1bec6a2" Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.085582 4727 scope.go:117] "RemoveContainer" containerID="86139311fcd659e15e698338cd67ea3a7e30452317e12593d9439645dda96a4a" Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.095129 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9kxrj"] Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.107056 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9kxrj"] Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.121007 4727 scope.go:117] "RemoveContainer" containerID="b4901c031aad8da95b86cdd8d058f0972738c723388246b27720bd1b3980e33e" Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.172139 4727 scope.go:117] "RemoveContainer" containerID="6b33510c4f8a1cc89f7a624314b63850374e5507999f7e47ea8bf3bbf1bec6a2" Dec 10 15:05:16 crc kubenswrapper[4727]: E1210 15:05:16.172702 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b33510c4f8a1cc89f7a624314b63850374e5507999f7e47ea8bf3bbf1bec6a2\": container with ID starting with 6b33510c4f8a1cc89f7a624314b63850374e5507999f7e47ea8bf3bbf1bec6a2 not found: ID does not exist" containerID="6b33510c4f8a1cc89f7a624314b63850374e5507999f7e47ea8bf3bbf1bec6a2" Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.172757 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b33510c4f8a1cc89f7a624314b63850374e5507999f7e47ea8bf3bbf1bec6a2"} err="failed to get container status 
\"6b33510c4f8a1cc89f7a624314b63850374e5507999f7e47ea8bf3bbf1bec6a2\": rpc error: code = NotFound desc = could not find container \"6b33510c4f8a1cc89f7a624314b63850374e5507999f7e47ea8bf3bbf1bec6a2\": container with ID starting with 6b33510c4f8a1cc89f7a624314b63850374e5507999f7e47ea8bf3bbf1bec6a2 not found: ID does not exist" Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.172791 4727 scope.go:117] "RemoveContainer" containerID="86139311fcd659e15e698338cd67ea3a7e30452317e12593d9439645dda96a4a" Dec 10 15:05:16 crc kubenswrapper[4727]: E1210 15:05:16.173197 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86139311fcd659e15e698338cd67ea3a7e30452317e12593d9439645dda96a4a\": container with ID starting with 86139311fcd659e15e698338cd67ea3a7e30452317e12593d9439645dda96a4a not found: ID does not exist" containerID="86139311fcd659e15e698338cd67ea3a7e30452317e12593d9439645dda96a4a" Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.173241 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86139311fcd659e15e698338cd67ea3a7e30452317e12593d9439645dda96a4a"} err="failed to get container status \"86139311fcd659e15e698338cd67ea3a7e30452317e12593d9439645dda96a4a\": rpc error: code = NotFound desc = could not find container \"86139311fcd659e15e698338cd67ea3a7e30452317e12593d9439645dda96a4a\": container with ID starting with 86139311fcd659e15e698338cd67ea3a7e30452317e12593d9439645dda96a4a not found: ID does not exist" Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.173276 4727 scope.go:117] "RemoveContainer" containerID="b4901c031aad8da95b86cdd8d058f0972738c723388246b27720bd1b3980e33e" Dec 10 15:05:16 crc kubenswrapper[4727]: E1210 15:05:16.173603 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4901c031aad8da95b86cdd8d058f0972738c723388246b27720bd1b3980e33e\": container with ID starting with b4901c031aad8da95b86cdd8d058f0972738c723388246b27720bd1b3980e33e not found: ID does not exist" containerID="b4901c031aad8da95b86cdd8d058f0972738c723388246b27720bd1b3980e33e" Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.173636 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4901c031aad8da95b86cdd8d058f0972738c723388246b27720bd1b3980e33e"} err="failed to get container status \"b4901c031aad8da95b86cdd8d058f0972738c723388246b27720bd1b3980e33e\": rpc error: code = NotFound desc = could not find container \"b4901c031aad8da95b86cdd8d058f0972738c723388246b27720bd1b3980e33e\": container with ID starting with b4901c031aad8da95b86cdd8d058f0972738c723388246b27720bd1b3980e33e not found: ID does not exist" Dec 10 15:05:16 crc kubenswrapper[4727]: I1210 15:05:16.580013 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e2d264c-1da8-4554-be4e-15514139877c" path="/var/lib/kubelet/pods/0e2d264c-1da8-4554-be4e-15514139877c/volumes" Dec 10 15:05:18 crc kubenswrapper[4727]: E1210 15:05:18.566350 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:05:23 crc kubenswrapper[4727]: E1210 15:05:23.579847 4727 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:05:27 crc kubenswrapper[4727]: I1210 15:05:27.563985 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:05:27 crc kubenswrapper[4727]: E1210 15:05:27.564717 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:05:30 crc kubenswrapper[4727]: I1210 15:05:30.058767 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-fds6f"] Dec 10 15:05:30 crc kubenswrapper[4727]: I1210 15:05:30.071310 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a77b-account-create-update-zhrh8"] Dec 10 15:05:30 crc kubenswrapper[4727]: I1210 15:05:30.081711 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-fds6f"] Dec 10 15:05:30 crc kubenswrapper[4727]: I1210 15:05:30.092530 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a77b-account-create-update-zhrh8"] Dec 10 15:05:30 crc kubenswrapper[4727]: I1210 15:05:30.576737 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df0cafa-4dc3-445a-8107-1098c218c787" path="/var/lib/kubelet/pods/7df0cafa-4dc3-445a-8107-1098c218c787/volumes" Dec 10 15:05:30 crc kubenswrapper[4727]: I1210 15:05:30.578046 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6" path="/var/lib/kubelet/pods/9d9ff9d9-b83b-4eaa-a0f1-629cb04471d6/volumes" Dec 10 15:05:32 crc kubenswrapper[4727]: E1210 15:05:32.566395 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:05:37 crc kubenswrapper[4727]: E1210 15:05:37.676290 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:05:37 crc kubenswrapper[4727]: E1210 15:05:37.677265 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:05:37 crc kubenswrapper[4727]: E1210 15:05:37.677551 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:05:37 crc kubenswrapper[4727]: E1210 15:05:37.678729 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:05:38 crc kubenswrapper[4727]: I1210 15:05:38.040951 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-03a2-account-create-update-p7wsc"] Dec 10 15:05:38 crc kubenswrapper[4727]: I1210 15:05:38.060352 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-03a2-account-create-update-p7wsc"] Dec 10 15:05:38 crc kubenswrapper[4727]: I1210 15:05:38.071006 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-ebc3-account-create-update-cr7fz"] Dec 10 15:05:38 crc kubenswrapper[4727]: I1210 15:05:38.083412 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-ztlsl"] Dec 10 15:05:38 crc kubenswrapper[4727]: I1210 15:05:38.096778 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-ztlsl"] Dec 10 15:05:38 crc kubenswrapper[4727]: I1210 15:05:38.112131 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-ebc3-account-create-update-cr7fz"] Dec 10 15:05:38 crc kubenswrapper[4727]: I1210 15:05:38.563427 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:05:38 crc kubenswrapper[4727]: I1210 15:05:38.576522 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56bec11f-546b-44e3-9fbe-11468e08ebca" path="/var/lib/kubelet/pods/56bec11f-546b-44e3-9fbe-11468e08ebca/volumes" Dec 10 15:05:38 crc kubenswrapper[4727]: I1210 15:05:38.577281 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="577fad75-56b7-4ad0-89c9-b44f0c771ef7" path="/var/lib/kubelet/pods/577fad75-56b7-4ad0-89c9-b44f0c771ef7/volumes" Dec 10 15:05:38 crc kubenswrapper[4727]: I1210 15:05:38.577962 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b7b4d6-d45b-40ce-80db-772552dfa8e0" path="/var/lib/kubelet/pods/c8b7b4d6-d45b-40ce-80db-772552dfa8e0/volumes" Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.051241 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-05f6-account-create-update-48sht"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.075206 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fpmqv"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.087230 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6bf5-account-create-update-8pj4j"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.100330 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fpmqv"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.109780 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-05f6-account-create-update-48sht"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.121070 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6bf5-account-create-update-8pj4j"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.132540 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-0fdf-account-create-update-2tlkr"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.145568 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-n2gx8"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.158304 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cloudkitty-0fdf-account-create-update-2tlkr"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.177183 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-n2gx8"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.188022 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-ndq58"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.197323 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-cv5tz"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.211356 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-ndq58"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.229259 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-cv5tz"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.239580 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-224d-account-create-update-897d4"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.250473 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-224d-account-create-update-897d4"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.261687 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-b8j8z"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.312161 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-b8j8z"] Dec 10 15:05:39 crc kubenswrapper[4727]: I1210 15:05:39.671192 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"8b5f852b583437ec1b25475f241ec6d146ca628d16e1264f335956ab2c69ec76"} Dec 10 15:05:40 crc kubenswrapper[4727]: I1210 15:05:40.577220 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402116b0-924d-4dec-aece-9da581a05b83" path="/var/lib/kubelet/pods/402116b0-924d-4dec-aece-9da581a05b83/volumes" Dec 10 15:05:40 crc kubenswrapper[4727]: I1210 15:05:40.581268 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="506cfe4b-7b71-418d-bba3-0e534380eea8" path="/var/lib/kubelet/pods/506cfe4b-7b71-418d-bba3-0e534380eea8/volumes" Dec 10 15:05:40 crc kubenswrapper[4727]: I1210 15:05:40.583122 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c0327a-5640-4478-8641-5e495745e5cd" path="/var/lib/kubelet/pods/51c0327a-5640-4478-8641-5e495745e5cd/volumes" Dec 10 15:05:40 crc kubenswrapper[4727]: I1210 15:05:40.585007 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c" path="/var/lib/kubelet/pods/8522f4ac-d6ee-43b0-b3eb-ad9eb8d8779c/volumes" Dec 10 15:05:40 crc kubenswrapper[4727]: I1210 15:05:40.588157 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="945edb14-70e4-40c5-a208-f14443517e42" path="/var/lib/kubelet/pods/945edb14-70e4-40c5-a208-f14443517e42/volumes" Dec 10 15:05:40 crc kubenswrapper[4727]: I1210 15:05:40.596302 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c737fc-a5d8-4dde-9040-d6ff30a37557" path="/var/lib/kubelet/pods/b6c737fc-a5d8-4dde-9040-d6ff30a37557/volumes" Dec 10 15:05:40 crc kubenswrapper[4727]: I1210 15:05:40.597077 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8712c0b-547f-4dda-83f9-bc4d5b9063e8" 
path="/var/lib/kubelet/pods/b8712c0b-547f-4dda-83f9-bc4d5b9063e8/volumes" Dec 10 15:05:40 crc kubenswrapper[4727]: I1210 15:05:40.597734 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9724e12-6c3d-48c9-b783-13520354dda1" path="/var/lib/kubelet/pods/e9724e12-6c3d-48c9-b783-13520354dda1/volumes" Dec 10 15:05:40 crc kubenswrapper[4727]: I1210 15:05:40.598875 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a907b9-0bc8-44e2-b3cb-3a1e867975ec" path="/var/lib/kubelet/pods/f4a907b9-0bc8-44e2-b3cb-3a1e867975ec/volumes" Dec 10 15:05:45 crc kubenswrapper[4727]: E1210 15:05:45.565799 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:05:49 crc kubenswrapper[4727]: E1210 15:05:49.565873 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:05:57 crc kubenswrapper[4727]: E1210 15:05:57.568373 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:06:02 crc kubenswrapper[4727]: E1210 15:06:02.566629 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:06:11 crc kubenswrapper[4727]: E1210 15:06:11.565933 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:06:13 crc kubenswrapper[4727]: E1210 15:06:13.566208 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:06:17 crc kubenswrapper[4727]: I1210 15:06:17.047067 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-5phw9"] Dec 10 15:06:17 crc kubenswrapper[4727]: I1210 15:06:17.058335 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-5phw9"] Dec 10 15:06:18 crc kubenswrapper[4727]: I1210 15:06:18.577546 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70a282a3-fd71-4fd7-9a0b-7871a930affc" 
path="/var/lib/kubelet/pods/70a282a3-fd71-4fd7-9a0b-7871a930affc/volumes" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.307444 4727 scope.go:117] "RemoveContainer" containerID="6353a233e6b3226d18f3d7528d56f27b2a998ee22361c4eb87105cd934ad33d6" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.354561 4727 scope.go:117] "RemoveContainer" containerID="2333e2aa523fe595ec92cd3183b859af5d30c52b84691167903304a47b69e2b3" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.386525 4727 scope.go:117] "RemoveContainer" containerID="6f5d4d38ed8c6602ea941120bef89509cc0ff1d6ccf9d554ff2d6e31377aeac4" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.411220 4727 scope.go:117] "RemoveContainer" containerID="8bda4e32ae9e590de94815bd90216148a2ecbfacfee073d9f1e1d414ae015ffc" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.478983 4727 scope.go:117] "RemoveContainer" containerID="c1cc9183af7b8b0308207008c8fd76447cbef1da21c4290678387239c3036349" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.547954 4727 scope.go:117] "RemoveContainer" containerID="b8f89e796222aa3865301b7fe9d57831de93d10e06621cbfbad281b496d58c6c" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.600530 4727 scope.go:117] "RemoveContainer" containerID="d2b2f1777571005041008f97868685df5fa6514438ba93a1d36545c1bfbc1af5" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.649885 4727 scope.go:117] "RemoveContainer" containerID="5b829f2452808ffc4debe63cdbd3465c47877a424d1cc266c4d973c24511537d" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.707851 4727 scope.go:117] "RemoveContainer" containerID="c53be6facd652b15459f50cf4387ec1e394253f718fb7c436e5b98f93f545e4d" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.731741 4727 scope.go:117] "RemoveContainer" containerID="976415abce4149cef2f2bf6c4d947647a9154404f4b53ee7f2b40b5bfcef535e" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.763144 4727 scope.go:117] "RemoveContainer" containerID="edea96d83b8d3e45d3a3b78f2ba974943b432df3fa330fb1359e7b18c5ee5ca7" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.792658 4727 scope.go:117] "RemoveContainer" containerID="438b971e05328a4f090a89b9531eca39efe19e1ae34b1cc538d5c87b3ed17266" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.822777 4727 scope.go:117] "RemoveContainer" containerID="3769165c241d7118262dd1e3a26c27407f49bf97a5becf0a86cb525a1bbb4249" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.853656 4727 scope.go:117] "RemoveContainer" containerID="63881adbe210ada6fb903b19a48b9c7cadb46d53decce91a0c1b6a9f1fab6f1a" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.875155 4727 scope.go:117] "RemoveContainer" containerID="afaf487934120d553e625c542c6f4868fa3756969ffddaaa44150ac64124da74" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.897646 4727 scope.go:117] "RemoveContainer" containerID="713cb248995a640c1cc2d8e6138dec9ce2a9eb9ee51c25b790b301a2fd5dea66" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.920702 4727 scope.go:117] "RemoveContainer" containerID="82743c870535a32c398de2204838e3deb23b6440f8887bfda6cd2a86c53c07f2" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.944223 4727 scope.go:117] "RemoveContainer" containerID="3d78f4eb0703ce61821b347174f137572bf8b16b010926faae7bef03669a90d6" Dec 10 15:06:19 crc kubenswrapper[4727]: I1210 15:06:19.977032 4727 scope.go:117] "RemoveContainer" containerID="d73999af619850b6c8a9f166583e763b52f1612f86dbbe752bca052eb1da1c5f" Dec 10 15:06:20 crc kubenswrapper[4727]: I1210 15:06:20.007890 4727 
scope.go:117] "RemoveContainer" containerID="526ee02f6b6d318bc473490ef5c69590b95e08d6d691aff202d55b00ceea77af" Dec 10 15:06:20 crc kubenswrapper[4727]: I1210 15:06:20.032104 4727 scope.go:117] "RemoveContainer" containerID="c98a3d2ef185b37b4effba5cb660dc3825850534f0667eda304a4bd5bb8f49be" Dec 10 15:06:20 crc kubenswrapper[4727]: I1210 15:06:20.058527 4727 scope.go:117] "RemoveContainer" containerID="5eaacda7a4edd18755cb2532a9120e0fdb6e76b5671180b3d1a0ce3bc2d7d124" Dec 10 15:06:23 crc kubenswrapper[4727]: E1210 15:06:23.565316 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:06:26 crc kubenswrapper[4727]: E1210 15:06:26.574196 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:06:38 crc kubenswrapper[4727]: E1210 15:06:38.566558 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:06:38 crc kubenswrapper[4727]: E1210 15:06:38.567845 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:06:50 crc kubenswrapper[4727]: E1210 15:06:50.565726 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:06:50 crc kubenswrapper[4727]: E1210 15:06:50.566039 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:06:51 crc kubenswrapper[4727]: I1210 15:06:51.986117 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-944r6"] Dec 10 15:06:51 crc kubenswrapper[4727]: E1210 15:06:51.987634 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2d264c-1da8-4554-be4e-15514139877c" containerName="extract-content" Dec 10 15:06:51 crc kubenswrapper[4727]: I1210 15:06:51.987778 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2d264c-1da8-4554-be4e-15514139877c" containerName="extract-content" Dec 10 15:06:51 crc kubenswrapper[4727]: E1210 
Dec 10 15:06:51 crc kubenswrapper[4727]: I1210 15:06:51.987998 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2d264c-1da8-4554-be4e-15514139877c" containerName="registry-server"
Dec 10 15:06:51 crc kubenswrapper[4727]: E1210 15:06:51.988097 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2d264c-1da8-4554-be4e-15514139877c" containerName="extract-utilities"
Dec 10 15:06:51 crc kubenswrapper[4727]: I1210 15:06:51.988172 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2d264c-1da8-4554-be4e-15514139877c" containerName="extract-utilities"
Dec 10 15:06:51 crc kubenswrapper[4727]: I1210 15:06:51.988489 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e2d264c-1da8-4554-be4e-15514139877c" containerName="registry-server"
Dec 10 15:06:51 crc kubenswrapper[4727]: I1210 15:06:51.991856 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-944r6"
Dec 10 15:06:52 crc kubenswrapper[4727]: I1210 15:06:52.009303 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-944r6"]
Dec 10 15:06:52 crc kubenswrapper[4727]: I1210 15:06:52.184749 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/430198f1-837c-405e-a2cf-b909fe7fc6c4-utilities\") pod \"redhat-operators-944r6\" (UID: \"430198f1-837c-405e-a2cf-b909fe7fc6c4\") " pod="openshift-marketplace/redhat-operators-944r6"
Dec 10 15:06:52 crc kubenswrapper[4727]: I1210 15:06:52.184829 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tb4z\" (UniqueName: \"kubernetes.io/projected/430198f1-837c-405e-a2cf-b909fe7fc6c4-kube-api-access-5tb4z\") pod \"redhat-operators-944r6\" (UID: \"430198f1-837c-405e-a2cf-b909fe7fc6c4\") " pod="openshift-marketplace/redhat-operators-944r6"
Dec 10 15:06:52 crc kubenswrapper[4727]: I1210 15:06:52.185330 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/430198f1-837c-405e-a2cf-b909fe7fc6c4-catalog-content\") pod \"redhat-operators-944r6\" (UID: \"430198f1-837c-405e-a2cf-b909fe7fc6c4\") " pod="openshift-marketplace/redhat-operators-944r6"
Dec 10 15:06:52 crc kubenswrapper[4727]: I1210 15:06:52.287338 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/430198f1-837c-405e-a2cf-b909fe7fc6c4-catalog-content\") pod \"redhat-operators-944r6\" (UID: \"430198f1-837c-405e-a2cf-b909fe7fc6c4\") " pod="openshift-marketplace/redhat-operators-944r6"
Dec 10 15:06:52 crc kubenswrapper[4727]: I1210 15:06:52.287535 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/430198f1-837c-405e-a2cf-b909fe7fc6c4-utilities\") pod \"redhat-operators-944r6\" (UID: \"430198f1-837c-405e-a2cf-b909fe7fc6c4\") " pod="openshift-marketplace/redhat-operators-944r6"
Dec 10 15:06:52 crc kubenswrapper[4727]: I1210 15:06:52.287592 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tb4z\" (UniqueName: \"kubernetes.io/projected/430198f1-837c-405e-a2cf-b909fe7fc6c4-kube-api-access-5tb4z\") pod \"redhat-operators-944r6\" (UID: \"430198f1-837c-405e-a2cf-b909fe7fc6c4\") " pod="openshift-marketplace/redhat-operators-944r6"
Dec 10 15:06:52 crc kubenswrapper[4727]: I1210 15:06:52.287940 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/430198f1-837c-405e-a2cf-b909fe7fc6c4-catalog-content\") pod \"redhat-operators-944r6\" (UID: \"430198f1-837c-405e-a2cf-b909fe7fc6c4\") " pod="openshift-marketplace/redhat-operators-944r6"
Dec 10 15:06:52 crc kubenswrapper[4727]: I1210 15:06:52.288084 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/430198f1-837c-405e-a2cf-b909fe7fc6c4-utilities\") pod \"redhat-operators-944r6\" (UID: \"430198f1-837c-405e-a2cf-b909fe7fc6c4\") " pod="openshift-marketplace/redhat-operators-944r6"
Dec 10 15:06:52 crc kubenswrapper[4727]: I1210 15:06:52.312126 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tb4z\" (UniqueName: \"kubernetes.io/projected/430198f1-837c-405e-a2cf-b909fe7fc6c4-kube-api-access-5tb4z\") pod \"redhat-operators-944r6\" (UID: \"430198f1-837c-405e-a2cf-b909fe7fc6c4\") " pod="openshift-marketplace/redhat-operators-944r6"
Dec 10 15:06:52 crc kubenswrapper[4727]: I1210 15:06:52.320736 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-944r6"
Dec 10 15:06:53 crc kubenswrapper[4727]: I1210 15:06:53.069420 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-944r6"]
Dec 10 15:06:53 crc kubenswrapper[4727]: I1210 15:06:53.737516 4727 generic.go:334] "Generic (PLEG): container finished" podID="430198f1-837c-405e-a2cf-b909fe7fc6c4" containerID="9c25c48ae6be7d8ab37e422002d6831928badfcc211ad575f87408edb1eaf0f4" exitCode=0
Dec 10 15:06:53 crc kubenswrapper[4727]: I1210 15:06:53.737634 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-944r6" event={"ID":"430198f1-837c-405e-a2cf-b909fe7fc6c4","Type":"ContainerDied","Data":"9c25c48ae6be7d8ab37e422002d6831928badfcc211ad575f87408edb1eaf0f4"}
Dec 10 15:06:53 crc kubenswrapper[4727]: I1210 15:06:53.737837 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-944r6" event={"ID":"430198f1-837c-405e-a2cf-b909fe7fc6c4","Type":"ContainerStarted","Data":"ddcb526c88dbe7c007196b86a35df3b063fdb37b440f467d6b2164a706dabf67"}
Dec 10 15:06:53 crc kubenswrapper[4727]: I1210 15:06:53.740391 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 10 15:06:55 crc kubenswrapper[4727]: I1210 15:06:55.960456 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-944r6" event={"ID":"430198f1-837c-405e-a2cf-b909fe7fc6c4","Type":"ContainerStarted","Data":"766b7833b5cea795549f872840287b5325ff7c6e3f6693ed3cd7529202b52c90"}
Dec 10 15:07:00 crc kubenswrapper[4727]: I1210 15:07:00.005284 4727 generic.go:334] "Generic (PLEG): container finished" podID="430198f1-837c-405e-a2cf-b909fe7fc6c4" containerID="766b7833b5cea795549f872840287b5325ff7c6e3f6693ed3cd7529202b52c90" exitCode=0
Dec 10 15:07:00 crc kubenswrapper[4727]: I1210 15:07:00.005356 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-944r6" event={"ID":"430198f1-837c-405e-a2cf-b909fe7fc6c4","Type":"ContainerDied","Data":"766b7833b5cea795549f872840287b5325ff7c6e3f6693ed3cd7529202b52c90"}
Dec 10 15:07:01 crc kubenswrapper[4727]: I1210 15:07:01.035504 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-944r6" event={"ID":"430198f1-837c-405e-a2cf-b909fe7fc6c4","Type":"ContainerStarted","Data":"87fa419e28e5ce0ab74be850d3d3425144b3a59be9836053e4acf3a2dfab6698"}
Dec 10 15:07:01 crc kubenswrapper[4727]: I1210 15:07:01.060201 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-944r6" podStartSLOduration=3.110410118 podStartE2EDuration="10.060183355s" podCreationTimestamp="2025-12-10 15:06:51 +0000 UTC" firstStartedPulling="2025-12-10 15:06:53.73999674 +0000 UTC m=+2117.934771282" lastFinishedPulling="2025-12-10 15:07:00.689769977 +0000 UTC m=+2124.884544519" observedRunningTime="2025-12-10 15:07:01.055563219 +0000 UTC m=+2125.250337761" watchObservedRunningTime="2025-12-10 15:07:01.060183355 +0000 UTC m=+2125.254957897"
Dec 10 15:07:01 crc kubenswrapper[4727]: E1210 15:07:01.564319 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:07:02 crc kubenswrapper[4727]: I1210 15:07:02.073001 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-zb5mz"]
Dec 10 15:07:02 crc kubenswrapper[4727]: I1210 15:07:02.087802 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-zb5mz"]
Dec 10 15:07:02 crc kubenswrapper[4727]: I1210 15:07:02.323262 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-944r6"
Dec 10 15:07:02 crc kubenswrapper[4727]: I1210 15:07:02.323311 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-944r6"
Dec 10 15:07:02 crc kubenswrapper[4727]: I1210 15:07:02.575116 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c3a0fe-e0f7-4f79-ae22-d143511424e9" path="/var/lib/kubelet/pods/20c3a0fe-e0f7-4f79-ae22-d143511424e9/volumes"
Dec 10 15:07:03 crc kubenswrapper[4727]: I1210 15:07:03.056798 4727 generic.go:334] "Generic (PLEG): container finished" podID="bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2" containerID="4eb900fe47df334d871ea7fbbd5c18119e72d684515dbebcf3c3b1fd06c90bae" exitCode=0
Dec 10 15:07:03 crc kubenswrapper[4727]: I1210 15:07:03.057134 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" event={"ID":"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2","Type":"ContainerDied","Data":"4eb900fe47df334d871ea7fbbd5c18119e72d684515dbebcf3c3b1fd06c90bae"}
Dec 10 15:07:03 crc kubenswrapper[4727]: I1210 15:07:03.383051 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-944r6" podUID="430198f1-837c-405e-a2cf-b909fe7fc6c4" containerName="registry-server" probeResult="failure" output=<
Dec 10 15:07:03 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s
Dec 10 15:07:03 crc kubenswrapper[4727]: >
Dec 10 15:07:04 crc kubenswrapper[4727]: E1210 15:07:04.564855 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:07:04 crc kubenswrapper[4727]: I1210 15:07:04.637477 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:07:04 crc kubenswrapper[4727]: I1210 15:07:04.804644 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-bootstrap-combined-ca-bundle\") pod \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " Dec 10 15:07:04 crc kubenswrapper[4727]: I1210 15:07:04.804712 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkc4j\" (UniqueName: \"kubernetes.io/projected/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-kube-api-access-kkc4j\") pod \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " Dec 10 15:07:04 crc kubenswrapper[4727]: I1210 15:07:04.804784 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-inventory\") pod \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " Dec 10 15:07:04 crc kubenswrapper[4727]: I1210 15:07:04.804958 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-ssh-key\") pod \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\" (UID: \"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2\") " Dec 10 15:07:04 crc kubenswrapper[4727]: I1210 15:07:04.811669 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2" (UID: "bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:07:04 crc kubenswrapper[4727]: I1210 15:07:04.812302 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-kube-api-access-kkc4j" (OuterVolumeSpecName: "kube-api-access-kkc4j") pod "bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2" (UID: "bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2"). InnerVolumeSpecName "kube-api-access-kkc4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:07:04 crc kubenswrapper[4727]: I1210 15:07:04.838928 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2" (UID: "bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:07:04 crc kubenswrapper[4727]: I1210 15:07:04.844119 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-inventory" (OuterVolumeSpecName: "inventory") pod "bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2" (UID: "bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:07:04 crc kubenswrapper[4727]: I1210 15:07:04.908003 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:07:04 crc kubenswrapper[4727]: I1210 15:07:04.908044 4727 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:07:04 crc kubenswrapper[4727]: I1210 15:07:04.908057 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkc4j\" (UniqueName: \"kubernetes.io/projected/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-kube-api-access-kkc4j\") on node \"crc\" DevicePath \"\"" Dec 10 15:07:04 crc kubenswrapper[4727]: I1210 15:07:04.908068 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.086957 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" event={"ID":"bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2","Type":"ContainerDied","Data":"ecc7937698ab550205f208a2056e7faf423128f02210f260fc3d8d7e0ed332db"} Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.087024 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecc7937698ab550205f208a2056e7faf423128f02210f260fc3d8d7e0ed332db" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.087035 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.177438 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q"] Dec 10 15:07:05 crc kubenswrapper[4727]: E1210 15:07:05.178380 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.178408 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.178710 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.179867 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.186005 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.186411 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.186449 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.186538 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j82js" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.191417 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q"] Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.320831 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq4mq\" (UniqueName: \"kubernetes.io/projected/7b7cce52-493a-4a84-a51a-768d6d40d69d-kube-api-access-dq4mq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z594q\" (UID: \"7b7cce52-493a-4a84-a51a-768d6d40d69d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.320964 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b7cce52-493a-4a84-a51a-768d6d40d69d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z594q\" (UID: \"7b7cce52-493a-4a84-a51a-768d6d40d69d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.321024 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b7cce52-493a-4a84-a51a-768d6d40d69d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z594q\" (UID: \"7b7cce52-493a-4a84-a51a-768d6d40d69d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.422965 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq4mq\" (UniqueName: \"kubernetes.io/projected/7b7cce52-493a-4a84-a51a-768d6d40d69d-kube-api-access-dq4mq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z594q\" (UID: \"7b7cce52-493a-4a84-a51a-768d6d40d69d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.423091 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b7cce52-493a-4a84-a51a-768d6d40d69d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z594q\" (UID: \"7b7cce52-493a-4a84-a51a-768d6d40d69d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.423150 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b7cce52-493a-4a84-a51a-768d6d40d69d-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-z594q\" (UID: \"7b7cce52-493a-4a84-a51a-768d6d40d69d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.429382 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b7cce52-493a-4a84-a51a-768d6d40d69d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z594q\" (UID: \"7b7cce52-493a-4a84-a51a-768d6d40d69d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.429518 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b7cce52-493a-4a84-a51a-768d6d40d69d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z594q\" (UID: \"7b7cce52-493a-4a84-a51a-768d6d40d69d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.444967 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq4mq\" (UniqueName: \"kubernetes.io/projected/7b7cce52-493a-4a84-a51a-768d6d40d69d-kube-api-access-dq4mq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z594q\" (UID: \"7b7cce52-493a-4a84-a51a-768d6d40d69d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" Dec 10 15:07:05 crc kubenswrapper[4727]: I1210 15:07:05.504707 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" Dec 10 15:07:06 crc kubenswrapper[4727]: I1210 15:07:06.111775 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q"] Dec 10 15:07:07 crc kubenswrapper[4727]: I1210 15:07:07.113515 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" event={"ID":"7b7cce52-493a-4a84-a51a-768d6d40d69d","Type":"ContainerStarted","Data":"0b0d9211fb577bcad5c9b15f086fee04cf88d8ca192772cd6dbd62c12f4be297"} Dec 10 15:07:07 crc kubenswrapper[4727]: I1210 15:07:07.114056 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" event={"ID":"7b7cce52-493a-4a84-a51a-768d6d40d69d","Type":"ContainerStarted","Data":"41b69c4e7ca14b12a57bf4f0dcf5ef72057287a2b6b5f9c5746e93ca6ee8cb42"} Dec 10 15:07:07 crc kubenswrapper[4727]: I1210 15:07:07.138029 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" podStartSLOduration=1.641012608 podStartE2EDuration="2.138006562s" podCreationTimestamp="2025-12-10 15:07:05 +0000 UTC" firstStartedPulling="2025-12-10 15:07:06.11732902 +0000 UTC m=+2130.312103552" lastFinishedPulling="2025-12-10 15:07:06.614322964 +0000 UTC m=+2130.809097506" observedRunningTime="2025-12-10 15:07:07.131316244 +0000 UTC m=+2131.326090796" watchObservedRunningTime="2025-12-10 15:07:07.138006562 +0000 UTC m=+2131.332781104" Dec 10 15:07:12 crc kubenswrapper[4727]: I1210 15:07:12.384493 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-944r6" Dec 10 15:07:12 crc kubenswrapper[4727]: I1210 15:07:12.453002 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-944r6" Dec 10 15:07:12 crc kubenswrapper[4727]: E1210 15:07:12.567250 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:07:12 crc kubenswrapper[4727]: I1210 15:07:12.650936 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-944r6"] Dec 10 15:07:14 crc kubenswrapper[4727]: I1210 15:07:14.193843 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-944r6" podUID="430198f1-837c-405e-a2cf-b909fe7fc6c4" containerName="registry-server" containerID="cri-o://87fa419e28e5ce0ab74be850d3d3425144b3a59be9836053e4acf3a2dfab6698" gracePeriod=2 Dec 10 15:07:14 crc kubenswrapper[4727]: I1210 15:07:14.721273 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-944r6" Dec 10 15:07:14 crc kubenswrapper[4727]: I1210 15:07:14.821243 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/430198f1-837c-405e-a2cf-b909fe7fc6c4-utilities\") pod \"430198f1-837c-405e-a2cf-b909fe7fc6c4\" (UID: \"430198f1-837c-405e-a2cf-b909fe7fc6c4\") " Dec 10 15:07:14 crc kubenswrapper[4727]: I1210 15:07:14.821385 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tb4z\" (UniqueName: \"kubernetes.io/projected/430198f1-837c-405e-a2cf-b909fe7fc6c4-kube-api-access-5tb4z\") pod \"430198f1-837c-405e-a2cf-b909fe7fc6c4\" (UID: \"430198f1-837c-405e-a2cf-b909fe7fc6c4\") " Dec 10 15:07:14 crc kubenswrapper[4727]: I1210 15:07:14.821520 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/430198f1-837c-405e-a2cf-b909fe7fc6c4-catalog-content\") pod \"430198f1-837c-405e-a2cf-b909fe7fc6c4\" (UID: \"430198f1-837c-405e-a2cf-b909fe7fc6c4\") " Dec 10 15:07:14 crc kubenswrapper[4727]: I1210 15:07:14.822363 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/430198f1-837c-405e-a2cf-b909fe7fc6c4-utilities" (OuterVolumeSpecName: "utilities") pod "430198f1-837c-405e-a2cf-b909fe7fc6c4" (UID: "430198f1-837c-405e-a2cf-b909fe7fc6c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:07:14 crc kubenswrapper[4727]: I1210 15:07:14.827962 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430198f1-837c-405e-a2cf-b909fe7fc6c4-kube-api-access-5tb4z" (OuterVolumeSpecName: "kube-api-access-5tb4z") pod "430198f1-837c-405e-a2cf-b909fe7fc6c4" (UID: "430198f1-837c-405e-a2cf-b909fe7fc6c4"). InnerVolumeSpecName "kube-api-access-5tb4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:07:14 crc kubenswrapper[4727]: I1210 15:07:14.923575 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/430198f1-837c-405e-a2cf-b909fe7fc6c4-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:07:14 crc kubenswrapper[4727]: I1210 15:07:14.923609 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tb4z\" (UniqueName: \"kubernetes.io/projected/430198f1-837c-405e-a2cf-b909fe7fc6c4-kube-api-access-5tb4z\") on node \"crc\" DevicePath \"\"" Dec 10 15:07:14 crc kubenswrapper[4727]: I1210 15:07:14.941172 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/430198f1-837c-405e-a2cf-b909fe7fc6c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "430198f1-837c-405e-a2cf-b909fe7fc6c4" (UID: "430198f1-837c-405e-a2cf-b909fe7fc6c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.025797 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/430198f1-837c-405e-a2cf-b909fe7fc6c4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.210208 4727 generic.go:334] "Generic (PLEG): container finished" podID="430198f1-837c-405e-a2cf-b909fe7fc6c4" containerID="87fa419e28e5ce0ab74be850d3d3425144b3a59be9836053e4acf3a2dfab6698" exitCode=0 Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.210285 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-944r6" event={"ID":"430198f1-837c-405e-a2cf-b909fe7fc6c4","Type":"ContainerDied","Data":"87fa419e28e5ce0ab74be850d3d3425144b3a59be9836053e4acf3a2dfab6698"} Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.210299 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-944r6" Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.210320 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-944r6" event={"ID":"430198f1-837c-405e-a2cf-b909fe7fc6c4","Type":"ContainerDied","Data":"ddcb526c88dbe7c007196b86a35df3b063fdb37b440f467d6b2164a706dabf67"} Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.210340 4727 scope.go:117] "RemoveContainer" containerID="87fa419e28e5ce0ab74be850d3d3425144b3a59be9836053e4acf3a2dfab6698" Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.244265 4727 scope.go:117] "RemoveContainer" containerID="766b7833b5cea795549f872840287b5325ff7c6e3f6693ed3cd7529202b52c90" Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.252218 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-944r6"] Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.263095 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-944r6"] Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.269337 4727 scope.go:117] "RemoveContainer" containerID="9c25c48ae6be7d8ab37e422002d6831928badfcc211ad575f87408edb1eaf0f4" Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.320627 4727 scope.go:117] "RemoveContainer" containerID="87fa419e28e5ce0ab74be850d3d3425144b3a59be9836053e4acf3a2dfab6698" Dec 10 15:07:15 crc kubenswrapper[4727]: E1210 15:07:15.321245 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87fa419e28e5ce0ab74be850d3d3425144b3a59be9836053e4acf3a2dfab6698\": container with ID starting with 87fa419e28e5ce0ab74be850d3d3425144b3a59be9836053e4acf3a2dfab6698 not found: ID does not exist" containerID="87fa419e28e5ce0ab74be850d3d3425144b3a59be9836053e4acf3a2dfab6698" Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.321328 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87fa419e28e5ce0ab74be850d3d3425144b3a59be9836053e4acf3a2dfab6698"} err="failed to get container status \"87fa419e28e5ce0ab74be850d3d3425144b3a59be9836053e4acf3a2dfab6698\": rpc error: code = NotFound desc = could not find container \"87fa419e28e5ce0ab74be850d3d3425144b3a59be9836053e4acf3a2dfab6698\": container with ID starting with 87fa419e28e5ce0ab74be850d3d3425144b3a59be9836053e4acf3a2dfab6698 not found: ID does not exist" Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.321357 4727 scope.go:117] "RemoveContainer" containerID="766b7833b5cea795549f872840287b5325ff7c6e3f6693ed3cd7529202b52c90" Dec 10 15:07:15 crc kubenswrapper[4727]: E1210 15:07:15.322108 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"766b7833b5cea795549f872840287b5325ff7c6e3f6693ed3cd7529202b52c90\": container with ID starting with 766b7833b5cea795549f872840287b5325ff7c6e3f6693ed3cd7529202b52c90 not found: ID does not exist" containerID="766b7833b5cea795549f872840287b5325ff7c6e3f6693ed3cd7529202b52c90" Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.322159 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"766b7833b5cea795549f872840287b5325ff7c6e3f6693ed3cd7529202b52c90"} err="failed to get container status \"766b7833b5cea795549f872840287b5325ff7c6e3f6693ed3cd7529202b52c90\": rpc error: code = NotFound desc = could not find container 
\"766b7833b5cea795549f872840287b5325ff7c6e3f6693ed3cd7529202b52c90\": container with ID starting with 766b7833b5cea795549f872840287b5325ff7c6e3f6693ed3cd7529202b52c90 not found: ID does not exist" Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.322203 4727 scope.go:117] "RemoveContainer" containerID="9c25c48ae6be7d8ab37e422002d6831928badfcc211ad575f87408edb1eaf0f4" Dec 10 15:07:15 crc kubenswrapper[4727]: E1210 15:07:15.322629 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c25c48ae6be7d8ab37e422002d6831928badfcc211ad575f87408edb1eaf0f4\": container with ID starting with 9c25c48ae6be7d8ab37e422002d6831928badfcc211ad575f87408edb1eaf0f4 not found: ID does not exist" containerID="9c25c48ae6be7d8ab37e422002d6831928badfcc211ad575f87408edb1eaf0f4" Dec 10 15:07:15 crc kubenswrapper[4727]: I1210 15:07:15.322690 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c25c48ae6be7d8ab37e422002d6831928badfcc211ad575f87408edb1eaf0f4"} err="failed to get container status \"9c25c48ae6be7d8ab37e422002d6831928badfcc211ad575f87408edb1eaf0f4\": rpc error: code = NotFound desc = could not find container \"9c25c48ae6be7d8ab37e422002d6831928badfcc211ad575f87408edb1eaf0f4\": container with ID starting with 9c25c48ae6be7d8ab37e422002d6831928badfcc211ad575f87408edb1eaf0f4 not found: ID does not exist" Dec 10 15:07:16 crc kubenswrapper[4727]: I1210 15:07:16.577428 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="430198f1-837c-405e-a2cf-b909fe7fc6c4" path="/var/lib/kubelet/pods/430198f1-837c-405e-a2cf-b909fe7fc6c4/volumes" Dec 10 15:07:17 crc kubenswrapper[4727]: E1210 15:07:17.564953 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:07:20 crc kubenswrapper[4727]: I1210 15:07:20.584545 4727 scope.go:117] "RemoveContainer" containerID="d3a749fc15d281c6fa8c86fa7be3076db686710d8208fde95c54e7d31c99b7cb" Dec 10 15:07:20 crc kubenswrapper[4727]: I1210 15:07:20.619848 4727 scope.go:117] "RemoveContainer" containerID="d92b4d40fd1384c1b441e23abb935ba80a45e0204cb1ef208abc372e63d83fc4" Dec 10 15:07:20 crc kubenswrapper[4727]: I1210 15:07:20.663375 4727 scope.go:117] "RemoveContainer" containerID="b0971705c837ba1755857e6caa5b76f82e0265ca87fab1c6c9e138470f145e21" Dec 10 15:07:23 crc kubenswrapper[4727]: E1210 15:07:23.565249 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:07:28 crc kubenswrapper[4727]: I1210 15:07:28.052390 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qphqt"] Dec 10 15:07:28 crc kubenswrapper[4727]: I1210 15:07:28.061104 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qphqt"] Dec 10 15:07:28 crc kubenswrapper[4727]: I1210 15:07:28.578351 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63d40623-45b4-447b-a237-36d83666ad4d" 
path="/var/lib/kubelet/pods/63d40623-45b4-447b-a237-36d83666ad4d/volumes" Dec 10 15:07:29 crc kubenswrapper[4727]: I1210 15:07:29.042520 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-vrmr7"] Dec 10 15:07:29 crc kubenswrapper[4727]: I1210 15:07:29.052822 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-vrmr7"] Dec 10 15:07:30 crc kubenswrapper[4727]: I1210 15:07:30.575100 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe1d20cc-47dd-4803-a7dd-36e43d2f2d43" path="/var/lib/kubelet/pods/fe1d20cc-47dd-4803-a7dd-36e43d2f2d43/volumes" Dec 10 15:07:31 crc kubenswrapper[4727]: E1210 15:07:31.691740 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:07:31 crc kubenswrapper[4727]: E1210 15:07:31.692035 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:07:31 crc kubenswrapper[4727]: E1210 15:07:31.692202 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:07:31 crc kubenswrapper[4727]: E1210 15:07:31.693443 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:07:32 crc kubenswrapper[4727]: I1210 15:07:32.030311 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-8fgtj"] Dec 10 15:07:32 crc kubenswrapper[4727]: I1210 15:07:32.140228 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-8fgtj"] Dec 10 15:07:32 crc kubenswrapper[4727]: I1210 15:07:32.574787 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb7744ce-28ea-4e2f-a20e-a925b562e221" path="/var/lib/kubelet/pods/eb7744ce-28ea-4e2f-a20e-a925b562e221/volumes" Dec 10 15:07:37 crc kubenswrapper[4727]: E1210 15:07:37.565763 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:07:47 crc kubenswrapper[4727]: E1210 15:07:47.565516 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:07:48 crc kubenswrapper[4727]: I1210 15:07:48.037178 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-8s8pt"] Dec 10 15:07:48 crc kubenswrapper[4727]: I1210 15:07:48.050486 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-l9hd6"] Dec 10 15:07:48 crc kubenswrapper[4727]: I1210 15:07:48.073200 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-l9hd6"] Dec 10 15:07:48 crc kubenswrapper[4727]: I1210 15:07:48.089492 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-8s8pt"] Dec 10 15:07:48 crc kubenswrapper[4727]: I1210 15:07:48.575767 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="672a3a2e-19cb-4512-a908-c8d6f16753f7" path="/var/lib/kubelet/pods/672a3a2e-19cb-4512-a908-c8d6f16753f7/volumes" Dec 10 15:07:48 crc kubenswrapper[4727]: I1210 15:07:48.576827 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a7dc74a-65ab-4440-b4d0-33c102b7baeb" path="/var/lib/kubelet/pods/6a7dc74a-65ab-4440-b4d0-33c102b7baeb/volumes" Dec 10 15:07:51 crc kubenswrapper[4727]: E1210 15:07:51.564998 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:07:58 crc kubenswrapper[4727]: E1210 15:07:58.565956 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:08:05 crc kubenswrapper[4727]: E1210 15:08:05.565791 4727 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:08:07 crc kubenswrapper[4727]: I1210 15:08:07.724465 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:08:07 crc kubenswrapper[4727]: I1210 15:08:07.724852 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:08:10 crc kubenswrapper[4727]: E1210 15:08:10.565529 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:08:12 crc kubenswrapper[4727]: I1210 15:08:12.130897 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-twwbm"] Dec 10 15:08:12 crc kubenswrapper[4727]: I1210 15:08:12.142058 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-twwbm"] Dec 10 15:08:12 crc kubenswrapper[4727]: I1210 15:08:12.576089 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94e96924-cc06-48cd-a531-b0d5714f0d1c" path="/var/lib/kubelet/pods/94e96924-cc06-48cd-a531-b0d5714f0d1c/volumes" Dec 10 15:08:18 crc kubenswrapper[4727]: E1210 15:08:18.680533 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:08:18 crc kubenswrapper[4727]: E1210 15:08:18.680610 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:08:18 crc kubenswrapper[4727]: E1210 15:08:18.680774 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:08:18 crc kubenswrapper[4727]: E1210 15:08:18.682012 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
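
[note] The ceilometer-central-agent spec dumped above carries an exec-style liveness probe (/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py with a 5s timeout, probed every 5s after a 300s initial delay, 3 consecutive failures allowed). Exec probes have simple semantics: the command is run inside the container and only exit status 0 counts as healthy. A minimal sketch of that contract, with the command and timeout as the only inputs:

    import subprocess

    # exec-probe semantics: exit code 0 == healthy; a timeout or any
    # non-zero exit counts as one probe failure. After failureThreshold
    # consecutive failures the kubelet restarts the container.
    def exec_probe(argv, timeout_seconds):
        try:
            return subprocess.run(argv, timeout=timeout_seconds).returncode == 0
        except subprocess.TimeoutExpired:
            return False

    # e.g. exec_probe(["/usr/bin/python3", "/var/lib/openstack/bin/centralhealth.py"], 5)
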
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:08:20 crc kubenswrapper[4727]: I1210 15:08:20.781768 4727 scope.go:117] "RemoveContainer" containerID="8678a2d1d4114f78beffb7d2645739e471a7dcdab2fa6f00021e307baeffa308" Dec 10 15:08:20 crc kubenswrapper[4727]: I1210 15:08:20.830989 4727 scope.go:117] "RemoveContainer" containerID="2d0d9ae6c91f1f9abf72e1dbe18e52380c19f15ca37e014d42f4fce841265487" Dec 10 15:08:20 crc kubenswrapper[4727]: I1210 15:08:20.871522 4727 scope.go:117] "RemoveContainer" containerID="e185c1c424a3fde4684e68da4b83ef178cbf765cbd64567d8c1ebc2bf36a6f7b" Dec 10 15:08:20 crc kubenswrapper[4727]: I1210 15:08:20.939092 4727 scope.go:117] "RemoveContainer" containerID="117c24f1629e191a6ad2f9a8448ced1e9a50d8ac8f7b8097da25da5915028c21" Dec 10 15:08:20 crc kubenswrapper[4727]: I1210 15:08:20.997638 4727 scope.go:117] "RemoveContainer" containerID="377e15f73a8ebc68b55d8e90d2bd4b962536ef4e72aa4937dfe8cb9909b68bcd" Dec 10 15:08:21 crc kubenswrapper[4727]: I1210 15:08:21.049633 4727 scope.go:117] "RemoveContainer" containerID="d6cdde58b5106b6ac3a3cc4f817fb81b41401c8bf479545683f4e3dee0f2e5e8" Dec 10 15:08:25 crc kubenswrapper[4727]: E1210 15:08:25.566201 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:08:29 crc kubenswrapper[4727]: E1210 15:08:29.566973 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:08:37 crc kubenswrapper[4727]: I1210 15:08:37.724093 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:08:37 crc kubenswrapper[4727]: I1210 15:08:37.724634 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:08:38 crc kubenswrapper[4727]: E1210 15:08:38.565937 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:08:40 crc kubenswrapper[4727]: E1210 15:08:40.564660 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:08:50 crc kubenswrapper[4727]: I1210 15:08:50.055080 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-7l2zw"] Dec 10 15:08:50 crc kubenswrapper[4727]: I1210 15:08:50.066695 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-7l2zw"] Dec 10 15:08:50 crc kubenswrapper[4727]: I1210 15:08:50.579227 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a" path="/var/lib/kubelet/pods/75fe6f7f-9d01-4dd0-9c63-fd1937ecad6a/volumes" Dec 10 15:08:51 crc kubenswrapper[4727]: I1210 15:08:51.051199 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-f3ad-account-create-update-bw2c8"] Dec 10 15:08:51 crc kubenswrapper[4727]: I1210 15:08:51.065291 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-8f779"] Dec 10 15:08:51 crc kubenswrapper[4727]: I1210 15:08:51.076712 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-8f779"] Dec 10 15:08:51 crc kubenswrapper[4727]: I1210 15:08:51.089161 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-f3ad-account-create-update-bw2c8"] Dec 10 15:08:52 crc kubenswrapper[4727]: I1210 15:08:52.032738 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-w96wg"] Dec 10 15:08:52 crc kubenswrapper[4727]: I1210 15:08:52.043014 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-b73c-account-create-update-6hb95"] Dec 10 15:08:52 crc kubenswrapper[4727]: I1210 15:08:52.053773 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-23fc-account-create-update-rlss8"] Dec 10 15:08:52 crc kubenswrapper[4727]: I1210 15:08:52.065357 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-w96wg"] Dec 10 15:08:52 crc kubenswrapper[4727]: I1210 15:08:52.073800 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-b73c-account-create-update-6hb95"] Dec 10 15:08:52 crc kubenswrapper[4727]: I1210 15:08:52.082456 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-23fc-account-create-update-rlss8"] Dec 10 15:08:52 crc kubenswrapper[4727]: E1210 15:08:52.566214 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:08:52 crc kubenswrapper[4727]: I1210 15:08:52.590711 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="046deabf-092c-4d2d-bbbf-0526f3a972bd" path="/var/lib/kubelet/pods/046deabf-092c-4d2d-bbbf-0526f3a972bd/volumes" Dec 10 15:08:52 crc kubenswrapper[4727]: I1210 15:08:52.591626 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="065d3ade-3310-4073-843a-73b9d05651b0" path="/var/lib/kubelet/pods/065d3ade-3310-4073-843a-73b9d05651b0/volumes" Dec 10 15:08:52 crc kubenswrapper[4727]: I1210 15:08:52.592503 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d431f69-61e4-4dd9-8a34-95a5cbcfc083" path="/var/lib/kubelet/pods/4d431f69-61e4-4dd9-8a34-95a5cbcfc083/volumes" Dec 10 15:08:52 crc kubenswrapper[4727]: I1210 
15:08:52.593259 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af8f31d-44ee-43b6-a9c8-2a86f391d33a" path="/var/lib/kubelet/pods/6af8f31d-44ee-43b6-a9c8-2a86f391d33a/volumes" Dec 10 15:08:52 crc kubenswrapper[4727]: I1210 15:08:52.594624 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ce4240-665b-4977-93f0-e0eff335bc4f" path="/var/lib/kubelet/pods/99ce4240-665b-4977-93f0-e0eff335bc4f/volumes" Dec 10 15:08:53 crc kubenswrapper[4727]: E1210 15:08:53.566634 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:09:05 crc kubenswrapper[4727]: E1210 15:09:05.564323 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:09:05 crc kubenswrapper[4727]: E1210 15:09:05.564345 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:09:07 crc kubenswrapper[4727]: I1210 15:09:07.723886 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:09:07 crc kubenswrapper[4727]: I1210 15:09:07.724778 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:09:07 crc kubenswrapper[4727]: I1210 15:09:07.724840 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 15:09:07 crc kubenswrapper[4727]: I1210 15:09:07.725999 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b5f852b583437ec1b25475f241ec6d146ca628d16e1264f335956ab2c69ec76"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:09:07 crc kubenswrapper[4727]: I1210 15:09:07.726115 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://8b5f852b583437ec1b25475f241ec6d146ca628d16e1264f335956ab2c69ec76" gracePeriod=600 Dec 10 15:09:08 crc kubenswrapper[4727]: I1210 15:09:08.714664 4727 generic.go:334] 
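
[note] The recurring machine-config-daemon failures are an HTTP liveness probe hitting http://127.0.0.1:8798/health and getting connection refused; the log shows it firing at 30-second intervals (15:08:07, 15:08:37, 15:09:07) until the kubelet declares the container unhealthy and kills it for restart. What an HTTP GET probe reduces to, sketched with the host, port, and path from this log:

    import http.client

    # a 2xx/3xx response within the timeout is success; a refused
    # connection (as in the log) or any other socket error is one failure.
    def http_probe(host="127.0.0.1", port=8798, path="/health", timeout=1.0):
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("GET", path)
            return 200 <= conn.getresponse().status < 400
        except OSError:  # ConnectionRefusedError is a subclass
            return False
        finally:
            conn.close()
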
"Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="8b5f852b583437ec1b25475f241ec6d146ca628d16e1264f335956ab2c69ec76" exitCode=0 Dec 10 15:09:08 crc kubenswrapper[4727]: I1210 15:09:08.714728 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"8b5f852b583437ec1b25475f241ec6d146ca628d16e1264f335956ab2c69ec76"} Dec 10 15:09:08 crc kubenswrapper[4727]: I1210 15:09:08.715704 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750"} Dec 10 15:09:08 crc kubenswrapper[4727]: I1210 15:09:08.715772 4727 scope.go:117] "RemoveContainer" containerID="4fb7c374022e898a445378c50c252f436b3bd4ead423bea5c107fb5f18f67212" Dec 10 15:09:16 crc kubenswrapper[4727]: E1210 15:09:16.574320 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:09:19 crc kubenswrapper[4727]: E1210 15:09:19.565786 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:09:21 crc kubenswrapper[4727]: I1210 15:09:21.192497 4727 scope.go:117] "RemoveContainer" containerID="4d44b3d81e380eb27da22b395fc9827ab8c5a1144de2ebff4eabcc22b3bac304" Dec 10 15:09:21 crc kubenswrapper[4727]: I1210 15:09:21.225068 4727 scope.go:117] "RemoveContainer" containerID="7f895b50a758fc71594c7cbd758c4d91d66a67858f59d64feee90a60bb30f8f7" Dec 10 15:09:21 crc kubenswrapper[4727]: I1210 15:09:21.287372 4727 scope.go:117] "RemoveContainer" containerID="a07469b98163a9585438ca68c1b83d192d29a1881a0d4acc3fc3e1c08bf5570d" Dec 10 15:09:21 crc kubenswrapper[4727]: I1210 15:09:21.347555 4727 scope.go:117] "RemoveContainer" containerID="af37f655d5bc53ea2dbe49783942be71565088d3aea83d169140133012584b0a" Dec 10 15:09:21 crc kubenswrapper[4727]: I1210 15:09:21.399582 4727 scope.go:117] "RemoveContainer" containerID="acfb31b9c3146b37746f2cd3b63f47c13256c4404a149b33243dfa86771e88c7" Dec 10 15:09:21 crc kubenswrapper[4727]: I1210 15:09:21.465361 4727 scope.go:117] "RemoveContainer" containerID="5775d709dd6393a7c9afca9314edc7eefff19f9eddc13d52b253b2c9d4e4aef3" Dec 10 15:09:28 crc kubenswrapper[4727]: E1210 15:09:28.566565 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:09:34 crc kubenswrapper[4727]: E1210 15:09:34.565352 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:09:42 crc kubenswrapper[4727]: E1210 15:09:42.566177 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:09:46 crc kubenswrapper[4727]: I1210 15:09:46.065248 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xkld5"] Dec 10 15:09:46 crc kubenswrapper[4727]: I1210 15:09:46.076431 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xkld5"] Dec 10 15:09:46 crc kubenswrapper[4727]: I1210 15:09:46.613343 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26cbc436-b0b3-4961-8a0e-48f797f04b5c" path="/var/lib/kubelet/pods/26cbc436-b0b3-4961-8a0e-48f797f04b5c/volumes" Dec 10 15:09:48 crc kubenswrapper[4727]: E1210 15:09:48.570275 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:09:53 crc kubenswrapper[4727]: I1210 15:09:53.965645 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jq6vr"] Dec 10 15:09:53 crc kubenswrapper[4727]: E1210 15:09:53.966976 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430198f1-837c-405e-a2cf-b909fe7fc6c4" containerName="extract-content" Dec 10 15:09:53 crc kubenswrapper[4727]: I1210 15:09:53.966996 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="430198f1-837c-405e-a2cf-b909fe7fc6c4" containerName="extract-content" Dec 10 15:09:53 crc kubenswrapper[4727]: E1210 15:09:53.967023 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430198f1-837c-405e-a2cf-b909fe7fc6c4" containerName="registry-server" Dec 10 15:09:53 crc kubenswrapper[4727]: I1210 15:09:53.967032 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="430198f1-837c-405e-a2cf-b909fe7fc6c4" containerName="registry-server" Dec 10 15:09:53 crc kubenswrapper[4727]: E1210 15:09:53.967078 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430198f1-837c-405e-a2cf-b909fe7fc6c4" containerName="extract-utilities" Dec 10 15:09:53 crc kubenswrapper[4727]: I1210 15:09:53.967088 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="430198f1-837c-405e-a2cf-b909fe7fc6c4" containerName="extract-utilities" Dec 10 15:09:53 crc kubenswrapper[4727]: I1210 15:09:53.967412 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="430198f1-837c-405e-a2cf-b909fe7fc6c4" containerName="registry-server" Dec 10 15:09:53 crc kubenswrapper[4727]: I1210 15:09:53.969643 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:09:53 crc kubenswrapper[4727]: I1210 15:09:53.988347 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jq6vr"] Dec 10 15:09:54 crc kubenswrapper[4727]: I1210 15:09:54.096337 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qwf2\" (UniqueName: \"kubernetes.io/projected/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-kube-api-access-2qwf2\") pod \"redhat-marketplace-jq6vr\" (UID: \"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9\") " pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:09:54 crc kubenswrapper[4727]: I1210 15:09:54.096412 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-utilities\") pod \"redhat-marketplace-jq6vr\" (UID: \"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9\") " pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:09:54 crc kubenswrapper[4727]: I1210 15:09:54.096480 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-catalog-content\") pod \"redhat-marketplace-jq6vr\" (UID: \"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9\") " pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:09:54 crc kubenswrapper[4727]: I1210 15:09:54.198441 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qwf2\" (UniqueName: \"kubernetes.io/projected/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-kube-api-access-2qwf2\") pod \"redhat-marketplace-jq6vr\" (UID: \"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9\") " pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:09:54 crc kubenswrapper[4727]: I1210 15:09:54.198544 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-utilities\") pod \"redhat-marketplace-jq6vr\" (UID: \"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9\") " pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:09:54 crc kubenswrapper[4727]: I1210 15:09:54.198648 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-catalog-content\") pod \"redhat-marketplace-jq6vr\" (UID: \"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9\") " pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:09:54 crc kubenswrapper[4727]: I1210 15:09:54.199223 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-catalog-content\") pod \"redhat-marketplace-jq6vr\" (UID: \"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9\") " pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:09:54 crc kubenswrapper[4727]: I1210 15:09:54.199223 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-utilities\") pod \"redhat-marketplace-jq6vr\" (UID: \"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9\") " pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:09:54 crc kubenswrapper[4727]: I1210 15:09:54.219729 4727 operation_generator.go:637] "MountVolume.SetUp 
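
[note] Each MountVolume.SetUp above materializes a directory under the pod's kubelet state tree; the "Cleaned up orphaned pod volumes dir" lines elsewhere in this log are the later removal of that same tree once the API object is gone. A small sketch of where those volumes land on disk, assuming the standard /var/lib/kubelet layout in which each plugin gets a directory such as kubernetes.io~empty-dir or kubernetes.io~projected:

    from pathlib import Path

    # volumes for one pod live at
    # /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<volumeName>,
    # e.g. .../volumes/kubernetes.io~empty-dir/catalog-content
    def pod_volume_dirs(pod_uid, root="/var/lib/kubelet/pods"):
        base = Path(root) / pod_uid / "volumes"
        return sorted(str(p) for p in base.glob("*/*"))

    # e.g. pod_volume_dirs("9d73f45f-77a4-4fa6-9fa5-6de8feb368c9")
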
succeeded for volume \"kube-api-access-2qwf2\" (UniqueName: \"kubernetes.io/projected/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-kube-api-access-2qwf2\") pod \"redhat-marketplace-jq6vr\" (UID: \"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9\") " pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:09:54 crc kubenswrapper[4727]: I1210 15:09:54.294246 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:09:54 crc kubenswrapper[4727]: I1210 15:09:54.933854 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jq6vr"] Dec 10 15:09:55 crc kubenswrapper[4727]: I1210 15:09:55.187750 4727 generic.go:334] "Generic (PLEG): container finished" podID="9d73f45f-77a4-4fa6-9fa5-6de8feb368c9" containerID="f8e8fe6833aba8e6770e7cc0af091656b283ced2054f78b6cc08668a811d85dc" exitCode=0 Dec 10 15:09:55 crc kubenswrapper[4727]: I1210 15:09:55.187849 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jq6vr" event={"ID":"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9","Type":"ContainerDied","Data":"f8e8fe6833aba8e6770e7cc0af091656b283ced2054f78b6cc08668a811d85dc"} Dec 10 15:09:55 crc kubenswrapper[4727]: I1210 15:09:55.187889 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jq6vr" event={"ID":"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9","Type":"ContainerStarted","Data":"c2c37e586b576f3c4242e6a4f6057d98976aa0948c5476d98cd49755eaaca96c"} Dec 10 15:09:56 crc kubenswrapper[4727]: I1210 15:09:56.199310 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jq6vr" event={"ID":"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9","Type":"ContainerStarted","Data":"bc158d1183363e004be443bf7b6e668a303d5e108e0bac1129c5a59f5c0d4710"} Dec 10 15:09:57 crc kubenswrapper[4727]: I1210 15:09:57.213366 4727 generic.go:334] "Generic (PLEG): container finished" podID="9d73f45f-77a4-4fa6-9fa5-6de8feb368c9" containerID="bc158d1183363e004be443bf7b6e668a303d5e108e0bac1129c5a59f5c0d4710" exitCode=0 Dec 10 15:09:57 crc kubenswrapper[4727]: I1210 15:09:57.213550 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jq6vr" event={"ID":"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9","Type":"ContainerDied","Data":"bc158d1183363e004be443bf7b6e668a303d5e108e0bac1129c5a59f5c0d4710"} Dec 10 15:09:57 crc kubenswrapper[4727]: E1210 15:09:57.569605 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:09:58 crc kubenswrapper[4727]: I1210 15:09:58.226114 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jq6vr" event={"ID":"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9","Type":"ContainerStarted","Data":"d2732f80ba1a0d097e7f7e5b43674eaa523bc221b3ee6ecebc419ed6485069be"} Dec 10 15:10:00 crc kubenswrapper[4727]: E1210 15:10:00.565971 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" 
podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:10:04 crc kubenswrapper[4727]: I1210 15:10:04.294740 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:10:04 crc kubenswrapper[4727]: I1210 15:10:04.295351 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:10:04 crc kubenswrapper[4727]: I1210 15:10:04.349692 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:10:04 crc kubenswrapper[4727]: I1210 15:10:04.370714 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jq6vr" podStartSLOduration=8.6645225 podStartE2EDuration="11.370696095s" podCreationTimestamp="2025-12-10 15:09:53 +0000 UTC" firstStartedPulling="2025-12-10 15:09:55.191153079 +0000 UTC m=+2299.385927611" lastFinishedPulling="2025-12-10 15:09:57.897326664 +0000 UTC m=+2302.092101206" observedRunningTime="2025-12-10 15:09:58.246814835 +0000 UTC m=+2302.441589377" watchObservedRunningTime="2025-12-10 15:10:04.370696095 +0000 UTC m=+2308.565470637" Dec 10 15:10:05 crc kubenswrapper[4727]: I1210 15:10:05.360356 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:10:05 crc kubenswrapper[4727]: I1210 15:10:05.413188 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jq6vr"] Dec 10 15:10:07 crc kubenswrapper[4727]: I1210 15:10:07.327675 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jq6vr" podUID="9d73f45f-77a4-4fa6-9fa5-6de8feb368c9" containerName="registry-server" containerID="cri-o://d2732f80ba1a0d097e7f7e5b43674eaa523bc221b3ee6ecebc419ed6485069be" gracePeriod=2 Dec 10 15:10:07 crc kubenswrapper[4727]: E1210 15:10:07.611382 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d73f45f_77a4_4fa6_9fa5_6de8feb368c9.slice/crio-d2732f80ba1a0d097e7f7e5b43674eaa523bc221b3ee6ecebc419ed6485069be.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d73f45f_77a4_4fa6_9fa5_6de8feb368c9.slice/crio-conmon-d2732f80ba1a0d097e7f7e5b43674eaa523bc221b3ee6ecebc419ed6485069be.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:10:07 crc kubenswrapper[4727]: I1210 15:10:07.996556 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.092831 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-utilities\") pod \"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9\" (UID: \"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9\") " Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.093034 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-catalog-content\") pod \"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9\" (UID: \"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9\") " Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.093297 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qwf2\" (UniqueName: \"kubernetes.io/projected/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-kube-api-access-2qwf2\") pod \"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9\" (UID: \"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9\") " Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.094733 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-utilities" (OuterVolumeSpecName: "utilities") pod "9d73f45f-77a4-4fa6-9fa5-6de8feb368c9" (UID: "9d73f45f-77a4-4fa6-9fa5-6de8feb368c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.101299 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-kube-api-access-2qwf2" (OuterVolumeSpecName: "kube-api-access-2qwf2") pod "9d73f45f-77a4-4fa6-9fa5-6de8feb368c9" (UID: "9d73f45f-77a4-4fa6-9fa5-6de8feb368c9"). InnerVolumeSpecName "kube-api-access-2qwf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.119267 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d73f45f-77a4-4fa6-9fa5-6de8feb368c9" (UID: "9d73f45f-77a4-4fa6-9fa5-6de8feb368c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.196110 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.196149 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.196163 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qwf2\" (UniqueName: \"kubernetes.io/projected/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9-kube-api-access-2qwf2\") on node \"crc\" DevicePath \"\"" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.339816 4727 generic.go:334] "Generic (PLEG): container finished" podID="9d73f45f-77a4-4fa6-9fa5-6de8feb368c9" containerID="d2732f80ba1a0d097e7f7e5b43674eaa523bc221b3ee6ecebc419ed6485069be" exitCode=0 Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.339932 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jq6vr" event={"ID":"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9","Type":"ContainerDied","Data":"d2732f80ba1a0d097e7f7e5b43674eaa523bc221b3ee6ecebc419ed6485069be"} Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.340144 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jq6vr" event={"ID":"9d73f45f-77a4-4fa6-9fa5-6de8feb368c9","Type":"ContainerDied","Data":"c2c37e586b576f3c4242e6a4f6057d98976aa0948c5476d98cd49755eaaca96c"} Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.340171 4727 scope.go:117] "RemoveContainer" containerID="d2732f80ba1a0d097e7f7e5b43674eaa523bc221b3ee6ecebc419ed6485069be" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.340044 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jq6vr" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.382099 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jq6vr"] Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.387057 4727 scope.go:117] "RemoveContainer" containerID="bc158d1183363e004be443bf7b6e668a303d5e108e0bac1129c5a59f5c0d4710" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.391597 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jq6vr"] Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.411030 4727 scope.go:117] "RemoveContainer" containerID="f8e8fe6833aba8e6770e7cc0af091656b283ced2054f78b6cc08668a811d85dc" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.471279 4727 scope.go:117] "RemoveContainer" containerID="d2732f80ba1a0d097e7f7e5b43674eaa523bc221b3ee6ecebc419ed6485069be" Dec 10 15:10:08 crc kubenswrapper[4727]: E1210 15:10:08.471950 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2732f80ba1a0d097e7f7e5b43674eaa523bc221b3ee6ecebc419ed6485069be\": container with ID starting with d2732f80ba1a0d097e7f7e5b43674eaa523bc221b3ee6ecebc419ed6485069be not found: ID does not exist" containerID="d2732f80ba1a0d097e7f7e5b43674eaa523bc221b3ee6ecebc419ed6485069be" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.471997 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2732f80ba1a0d097e7f7e5b43674eaa523bc221b3ee6ecebc419ed6485069be"} err="failed to get container status \"d2732f80ba1a0d097e7f7e5b43674eaa523bc221b3ee6ecebc419ed6485069be\": rpc error: code = NotFound desc = could not find container \"d2732f80ba1a0d097e7f7e5b43674eaa523bc221b3ee6ecebc419ed6485069be\": container with ID starting with d2732f80ba1a0d097e7f7e5b43674eaa523bc221b3ee6ecebc419ed6485069be not found: ID does not exist" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.472028 4727 scope.go:117] "RemoveContainer" containerID="bc158d1183363e004be443bf7b6e668a303d5e108e0bac1129c5a59f5c0d4710" Dec 10 15:10:08 crc kubenswrapper[4727]: E1210 15:10:08.472375 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc158d1183363e004be443bf7b6e668a303d5e108e0bac1129c5a59f5c0d4710\": container with ID starting with bc158d1183363e004be443bf7b6e668a303d5e108e0bac1129c5a59f5c0d4710 not found: ID does not exist" containerID="bc158d1183363e004be443bf7b6e668a303d5e108e0bac1129c5a59f5c0d4710" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.472418 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc158d1183363e004be443bf7b6e668a303d5e108e0bac1129c5a59f5c0d4710"} err="failed to get container status \"bc158d1183363e004be443bf7b6e668a303d5e108e0bac1129c5a59f5c0d4710\": rpc error: code = NotFound desc = could not find container \"bc158d1183363e004be443bf7b6e668a303d5e108e0bac1129c5a59f5c0d4710\": container with ID starting with bc158d1183363e004be443bf7b6e668a303d5e108e0bac1129c5a59f5c0d4710 not found: ID does not exist" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.472448 4727 scope.go:117] "RemoveContainer" containerID="f8e8fe6833aba8e6770e7cc0af091656b283ced2054f78b6cc08668a811d85dc" Dec 10 15:10:08 crc kubenswrapper[4727]: E1210 15:10:08.472983 4727 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f8e8fe6833aba8e6770e7cc0af091656b283ced2054f78b6cc08668a811d85dc\": container with ID starting with f8e8fe6833aba8e6770e7cc0af091656b283ced2054f78b6cc08668a811d85dc not found: ID does not exist" containerID="f8e8fe6833aba8e6770e7cc0af091656b283ced2054f78b6cc08668a811d85dc" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.473022 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8e8fe6833aba8e6770e7cc0af091656b283ced2054f78b6cc08668a811d85dc"} err="failed to get container status \"f8e8fe6833aba8e6770e7cc0af091656b283ced2054f78b6cc08668a811d85dc\": rpc error: code = NotFound desc = could not find container \"f8e8fe6833aba8e6770e7cc0af091656b283ced2054f78b6cc08668a811d85dc\": container with ID starting with f8e8fe6833aba8e6770e7cc0af091656b283ced2054f78b6cc08668a811d85dc not found: ID does not exist" Dec 10 15:10:08 crc kubenswrapper[4727]: E1210 15:10:08.565624 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:10:08 crc kubenswrapper[4727]: I1210 15:10:08.583090 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d73f45f-77a4-4fa6-9fa5-6de8feb368c9" path="/var/lib/kubelet/pods/9d73f45f-77a4-4fa6-9fa5-6de8feb368c9/volumes" Dec 10 15:10:13 crc kubenswrapper[4727]: E1210 15:10:13.567136 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:10:17 crc kubenswrapper[4727]: I1210 15:10:17.045438 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-t2shg"] Dec 10 15:10:17 crc kubenswrapper[4727]: I1210 15:10:17.058616 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-t2shg"] Dec 10 15:10:18 crc kubenswrapper[4727]: I1210 15:10:18.576284 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e777d780-3582-474d-997c-6bb3f1b108da" path="/var/lib/kubelet/pods/e777d780-3582-474d-997c-6bb3f1b108da/volumes" Dec 10 15:10:20 crc kubenswrapper[4727]: E1210 15:10:20.565493 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:10:21 crc kubenswrapper[4727]: I1210 15:10:21.638740 4727 scope.go:117] "RemoveContainer" containerID="5cc57ba2b2e184270eb446c31167b9ef0b6acc658255c2e37103bb617db09f61" Dec 10 15:10:21 crc kubenswrapper[4727]: I1210 15:10:21.708798 4727 scope.go:117] "RemoveContainer" containerID="c50ae70f186efe22dab02c813a21767ff112bf28193397173c172d84ff2eda35" Dec 10 15:10:27 crc kubenswrapper[4727]: E1210 15:10:27.565529 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:10:34 crc kubenswrapper[4727]: E1210 15:10:34.565499 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:10:42 crc kubenswrapper[4727]: E1210 15:10:42.564889 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:10:45 crc kubenswrapper[4727]: E1210 15:10:45.565476 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:10:54 crc kubenswrapper[4727]: E1210 15:10:54.565786 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:10:58 crc kubenswrapper[4727]: E1210 15:10:58.565522 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:11:07 crc kubenswrapper[4727]: E1210 15:11:07.565832 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:11:09 crc kubenswrapper[4727]: E1210 15:11:09.565318 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:11:12 crc kubenswrapper[4727]: I1210 15:11:12.055784 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5d4rn"] Dec 10 15:11:12 crc kubenswrapper[4727]: I1210 15:11:12.068514 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5d4rn"] Dec 10 15:11:12 crc kubenswrapper[4727]: I1210 15:11:12.578811 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="f00dd75e-d42e-41fa-93f2-728409ffcb47" path="/var/lib/kubelet/pods/f00dd75e-d42e-41fa-93f2-728409ffcb47/volumes" Dec 10 15:11:21 crc kubenswrapper[4727]: I1210 15:11:21.854937 4727 scope.go:117] "RemoveContainer" containerID="2c30ce2bb5b57642f62585f23a3a6eca2a90960fd0dca6100f3e33d90499b830" Dec 10 15:11:22 crc kubenswrapper[4727]: E1210 15:11:22.565114 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:11:23 crc kubenswrapper[4727]: E1210 15:11:23.565147 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:11:34 crc kubenswrapper[4727]: I1210 15:11:34.050180 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-sv8qp"] Dec 10 15:11:34 crc kubenswrapper[4727]: I1210 15:11:34.060727 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-sv8qp"] Dec 10 15:11:34 crc kubenswrapper[4727]: I1210 15:11:34.576954 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b60ac517-553a-4ca2-a7f5-b2617ce048f6" path="/var/lib/kubelet/pods/b60ac517-553a-4ca2-a7f5-b2617ce048f6/volumes" Dec 10 15:11:35 crc kubenswrapper[4727]: E1210 15:11:35.566177 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:11:36 crc kubenswrapper[4727]: E1210 15:11:36.607675 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:11:37 crc kubenswrapper[4727]: I1210 15:11:37.723799 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:11:37 crc kubenswrapper[4727]: I1210 15:11:37.724263 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:11:47 crc kubenswrapper[4727]: E1210 15:11:47.567331 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:11:48 crc kubenswrapper[4727]: E1210 15:11:48.566705 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:11:59 crc kubenswrapper[4727]: E1210 15:11:59.565600 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:12:02 crc kubenswrapper[4727]: E1210 15:12:02.568167 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:12:07 crc kubenswrapper[4727]: I1210 15:12:07.723499 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:12:07 crc kubenswrapper[4727]: I1210 15:12:07.724059 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:12:14 crc kubenswrapper[4727]: E1210 15:12:14.565953 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:12:16 crc kubenswrapper[4727]: E1210 15:12:16.573529 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:12:22 crc kubenswrapper[4727]: I1210 15:12:22.026949 4727 scope.go:117] "RemoveContainer" containerID="fcd47e44c0fde09bdb321662c78e09baa0175b7fd0774e4ffc81adce4f34fe98" Dec 10 15:12:27 crc kubenswrapper[4727]: E1210 15:12:27.565706 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:12:28 crc kubenswrapper[4727]: E1210 15:12:28.566699 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:12:37 crc kubenswrapper[4727]: I1210 15:12:37.723726 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:12:37 crc kubenswrapper[4727]: I1210 15:12:37.724349 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:12:37 crc kubenswrapper[4727]: I1210 15:12:37.724401 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 15:12:37 crc kubenswrapper[4727]: I1210 15:12:37.725355 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:12:37 crc kubenswrapper[4727]: I1210 15:12:37.725418 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" gracePeriod=600 Dec 10 15:12:37 crc kubenswrapper[4727]: E1210 15:12:37.854956 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:12:38 crc kubenswrapper[4727]: E1210 15:12:38.565858 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:12:38 crc kubenswrapper[4727]: I1210 15:12:38.855943 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" exitCode=0 Dec 10 15:12:38 crc kubenswrapper[4727]: I1210 15:12:38.856017 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750"} Dec 10 15:12:38 crc kubenswrapper[4727]: I1210 15:12:38.856135 4727 scope.go:117] "RemoveContainer" containerID="8b5f852b583437ec1b25475f241ec6d146ca628d16e1264f335956ab2c69ec76" Dec 10 15:12:38 crc kubenswrapper[4727]: I1210 15:12:38.857270 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:12:38 crc kubenswrapper[4727]: E1210 15:12:38.857732 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:12:41 crc kubenswrapper[4727]: I1210 15:12:41.567263 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:12:41 crc kubenswrapper[4727]: E1210 15:12:41.703489 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:12:41 crc kubenswrapper[4727]: E1210 15:12:41.703560 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:12:41 crc kubenswrapper[4727]: E1210 15:12:41.703741 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:12:41 crc kubenswrapper[4727]: E1210 15:12:41.705173 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:12:49 crc kubenswrapper[4727]: E1210 15:12:49.566792 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:12:52 crc kubenswrapper[4727]: I1210 15:12:52.564278 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:12:52 crc kubenswrapper[4727]: E1210 15:12:52.565852 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:12:55 crc kubenswrapper[4727]: E1210 15:12:55.566853 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:13:00 crc kubenswrapper[4727]: E1210 15:13:00.565455 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:13:06 crc kubenswrapper[4727]: I1210 15:13:06.571483 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:13:06 crc kubenswrapper[4727]: E1210 15:13:06.572202 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:13:09 crc kubenswrapper[4727]: E1210 15:13:09.565356 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:13:15 crc kubenswrapper[4727]: E1210 15:13:15.565715 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:13:18 crc 
kubenswrapper[4727]: I1210 15:13:18.564126 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:13:18 crc kubenswrapper[4727]: E1210 15:13:18.564632 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:13:23 crc kubenswrapper[4727]: E1210 15:13:23.568330 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.806326 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mx7c2"] Dec 10 15:13:26 crc kubenswrapper[4727]: E1210 15:13:26.807302 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d73f45f-77a4-4fa6-9fa5-6de8feb368c9" containerName="extract-content" Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.807322 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d73f45f-77a4-4fa6-9fa5-6de8feb368c9" containerName="extract-content" Dec 10 15:13:26 crc kubenswrapper[4727]: E1210 15:13:26.807376 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d73f45f-77a4-4fa6-9fa5-6de8feb368c9" containerName="registry-server" Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.807386 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d73f45f-77a4-4fa6-9fa5-6de8feb368c9" containerName="registry-server" Dec 10 15:13:26 crc kubenswrapper[4727]: E1210 15:13:26.807406 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d73f45f-77a4-4fa6-9fa5-6de8feb368c9" containerName="extract-utilities" Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.807415 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d73f45f-77a4-4fa6-9fa5-6de8feb368c9" containerName="extract-utilities" Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.807730 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d73f45f-77a4-4fa6-9fa5-6de8feb368c9" containerName="registry-server" Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.809964 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.820256 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mx7c2"] Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.830785 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8t5s\" (UniqueName: \"kubernetes.io/projected/44b2869f-2526-484c-b2e3-8f5fde277391-kube-api-access-w8t5s\") pod \"certified-operators-mx7c2\" (UID: \"44b2869f-2526-484c-b2e3-8f5fde277391\") " pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.831263 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44b2869f-2526-484c-b2e3-8f5fde277391-utilities\") pod \"certified-operators-mx7c2\" (UID: \"44b2869f-2526-484c-b2e3-8f5fde277391\") " pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.831412 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44b2869f-2526-484c-b2e3-8f5fde277391-catalog-content\") pod \"certified-operators-mx7c2\" (UID: \"44b2869f-2526-484c-b2e3-8f5fde277391\") " pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:26 crc kubenswrapper[4727]: E1210 15:13:26.892653 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:13:26 crc kubenswrapper[4727]: E1210 15:13:26.893079 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:13:26 crc kubenswrapper[4727]: E1210 15:13:26.893287 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:13:26 crc kubenswrapper[4727]: E1210 15:13:26.894764 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.933218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8t5s\" (UniqueName: \"kubernetes.io/projected/44b2869f-2526-484c-b2e3-8f5fde277391-kube-api-access-w8t5s\") pod \"certified-operators-mx7c2\" (UID: \"44b2869f-2526-484c-b2e3-8f5fde277391\") " pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.933368 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44b2869f-2526-484c-b2e3-8f5fde277391-utilities\") pod \"certified-operators-mx7c2\" (UID: \"44b2869f-2526-484c-b2e3-8f5fde277391\") " pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.933430 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44b2869f-2526-484c-b2e3-8f5fde277391-catalog-content\") pod \"certified-operators-mx7c2\" (UID: \"44b2869f-2526-484c-b2e3-8f5fde277391\") " pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.934104 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44b2869f-2526-484c-b2e3-8f5fde277391-catalog-content\") pod \"certified-operators-mx7c2\" (UID: \"44b2869f-2526-484c-b2e3-8f5fde277391\") " pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.934113 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44b2869f-2526-484c-b2e3-8f5fde277391-utilities\") pod \"certified-operators-mx7c2\" (UID: \"44b2869f-2526-484c-b2e3-8f5fde277391\") " pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:26 crc kubenswrapper[4727]: I1210 15:13:26.961426 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8t5s\" (UniqueName: \"kubernetes.io/projected/44b2869f-2526-484c-b2e3-8f5fde277391-kube-api-access-w8t5s\") pod \"certified-operators-mx7c2\" (UID: \"44b2869f-2526-484c-b2e3-8f5fde277391\") " pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:27 crc kubenswrapper[4727]: I1210 15:13:27.134721 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:27 crc kubenswrapper[4727]: I1210 15:13:27.662687 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mx7c2"] Dec 10 15:13:28 crc kubenswrapper[4727]: I1210 15:13:28.376230 4727 generic.go:334] "Generic (PLEG): container finished" podID="44b2869f-2526-484c-b2e3-8f5fde277391" containerID="dc270d81f5d9a3cf551069e921e7674aa617e6d4e7b1a1ddb8f473442271daed" exitCode=0 Dec 10 15:13:28 crc kubenswrapper[4727]: I1210 15:13:28.376345 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mx7c2" event={"ID":"44b2869f-2526-484c-b2e3-8f5fde277391","Type":"ContainerDied","Data":"dc270d81f5d9a3cf551069e921e7674aa617e6d4e7b1a1ddb8f473442271daed"} Dec 10 15:13:28 crc kubenswrapper[4727]: I1210 15:13:28.376528 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mx7c2" event={"ID":"44b2869f-2526-484c-b2e3-8f5fde277391","Type":"ContainerStarted","Data":"1ab44cab69f0cbcb0b9ecfc3a8236466ca10946d5b7f35da15478f145f44f00f"} Dec 10 15:13:29 crc kubenswrapper[4727]: I1210 15:13:29.387211 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mx7c2" event={"ID":"44b2869f-2526-484c-b2e3-8f5fde277391","Type":"ContainerStarted","Data":"ad6dd599b1c4d91590a8f882534d92fb7a63613b54563a2288167c9caa9e3a2f"} Dec 10 15:13:32 crc kubenswrapper[4727]: I1210 15:13:32.420607 4727 generic.go:334] "Generic (PLEG): container finished" podID="44b2869f-2526-484c-b2e3-8f5fde277391" containerID="ad6dd599b1c4d91590a8f882534d92fb7a63613b54563a2288167c9caa9e3a2f" exitCode=0 Dec 10 15:13:32 crc kubenswrapper[4727]: I1210 15:13:32.420702 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mx7c2" event={"ID":"44b2869f-2526-484c-b2e3-8f5fde277391","Type":"ContainerDied","Data":"ad6dd599b1c4d91590a8f882534d92fb7a63613b54563a2288167c9caa9e3a2f"} Dec 10 15:13:33 crc kubenswrapper[4727]: I1210 15:13:33.434128 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mx7c2" event={"ID":"44b2869f-2526-484c-b2e3-8f5fde277391","Type":"ContainerStarted","Data":"9900a7d38fd1109e204e603acdc7e1b26fdcd14ed8b099e3f0d4e6abee0a8893"} Dec 10 15:13:33 crc kubenswrapper[4727]: I1210 15:13:33.455836 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mx7c2" podStartSLOduration=2.984745788 podStartE2EDuration="7.455818733s" podCreationTimestamp="2025-12-10 15:13:26 +0000 UTC" firstStartedPulling="2025-12-10 15:13:28.378667515 +0000 UTC m=+2512.573442057" lastFinishedPulling="2025-12-10 15:13:32.84974046 +0000 UTC m=+2517.044515002" observedRunningTime="2025-12-10 15:13:33.451786611 +0000 UTC m=+2517.646561153" watchObservedRunningTime="2025-12-10 15:13:33.455818733 +0000 UTC m=+2517.650593275" Dec 10 15:13:33 crc kubenswrapper[4727]: I1210 15:13:33.563619 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:13:33 crc kubenswrapper[4727]: E1210 15:13:33.563969 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:13:36 crc kubenswrapper[4727]: E1210 15:13:36.572175 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:13:37 crc kubenswrapper[4727]: I1210 15:13:37.135194 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:37 crc kubenswrapper[4727]: I1210 15:13:37.138834 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:37 crc kubenswrapper[4727]: I1210 15:13:37.196518 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:38 crc kubenswrapper[4727]: I1210 15:13:38.525578 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:38 crc kubenswrapper[4727]: I1210 15:13:38.587733 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mx7c2"] Dec 10 15:13:40 crc kubenswrapper[4727]: I1210 15:13:40.514283 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mx7c2" podUID="44b2869f-2526-484c-b2e3-8f5fde277391" containerName="registry-server" containerID="cri-o://9900a7d38fd1109e204e603acdc7e1b26fdcd14ed8b099e3f0d4e6abee0a8893" gracePeriod=2 Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.108401 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.129704 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44b2869f-2526-484c-b2e3-8f5fde277391-catalog-content\") pod \"44b2869f-2526-484c-b2e3-8f5fde277391\" (UID: \"44b2869f-2526-484c-b2e3-8f5fde277391\") " Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.129968 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44b2869f-2526-484c-b2e3-8f5fde277391-utilities\") pod \"44b2869f-2526-484c-b2e3-8f5fde277391\" (UID: \"44b2869f-2526-484c-b2e3-8f5fde277391\") " Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.130090 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8t5s\" (UniqueName: \"kubernetes.io/projected/44b2869f-2526-484c-b2e3-8f5fde277391-kube-api-access-w8t5s\") pod \"44b2869f-2526-484c-b2e3-8f5fde277391\" (UID: \"44b2869f-2526-484c-b2e3-8f5fde277391\") " Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.130867 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44b2869f-2526-484c-b2e3-8f5fde277391-utilities" (OuterVolumeSpecName: "utilities") pod "44b2869f-2526-484c-b2e3-8f5fde277391" (UID: "44b2869f-2526-484c-b2e3-8f5fde277391"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.136109 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44b2869f-2526-484c-b2e3-8f5fde277391-kube-api-access-w8t5s" (OuterVolumeSpecName: "kube-api-access-w8t5s") pod "44b2869f-2526-484c-b2e3-8f5fde277391" (UID: "44b2869f-2526-484c-b2e3-8f5fde277391"). InnerVolumeSpecName "kube-api-access-w8t5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.205457 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44b2869f-2526-484c-b2e3-8f5fde277391-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44b2869f-2526-484c-b2e3-8f5fde277391" (UID: "44b2869f-2526-484c-b2e3-8f5fde277391"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.232512 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44b2869f-2526-484c-b2e3-8f5fde277391-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.232546 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8t5s\" (UniqueName: \"kubernetes.io/projected/44b2869f-2526-484c-b2e3-8f5fde277391-kube-api-access-w8t5s\") on node \"crc\" DevicePath \"\"" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.232556 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44b2869f-2526-484c-b2e3-8f5fde277391-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.525536 4727 generic.go:334] "Generic (PLEG): container finished" podID="44b2869f-2526-484c-b2e3-8f5fde277391" containerID="9900a7d38fd1109e204e603acdc7e1b26fdcd14ed8b099e3f0d4e6abee0a8893" exitCode=0 Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.525591 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mx7c2" event={"ID":"44b2869f-2526-484c-b2e3-8f5fde277391","Type":"ContainerDied","Data":"9900a7d38fd1109e204e603acdc7e1b26fdcd14ed8b099e3f0d4e6abee0a8893"} Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.525601 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mx7c2" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.525630 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mx7c2" event={"ID":"44b2869f-2526-484c-b2e3-8f5fde277391","Type":"ContainerDied","Data":"1ab44cab69f0cbcb0b9ecfc3a8236466ca10946d5b7f35da15478f145f44f00f"} Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.525672 4727 scope.go:117] "RemoveContainer" containerID="9900a7d38fd1109e204e603acdc7e1b26fdcd14ed8b099e3f0d4e6abee0a8893" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.558157 4727 scope.go:117] "RemoveContainer" containerID="ad6dd599b1c4d91590a8f882534d92fb7a63613b54563a2288167c9caa9e3a2f" Dec 10 15:13:41 crc kubenswrapper[4727]: E1210 15:13:41.565368 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.565862 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mx7c2"] Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.576596 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mx7c2"] Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.604518 4727 scope.go:117] "RemoveContainer" containerID="dc270d81f5d9a3cf551069e921e7674aa617e6d4e7b1a1ddb8f473442271daed" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.630874 4727 scope.go:117] "RemoveContainer" containerID="9900a7d38fd1109e204e603acdc7e1b26fdcd14ed8b099e3f0d4e6abee0a8893" Dec 10 15:13:41 crc kubenswrapper[4727]: E1210 15:13:41.631325 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9900a7d38fd1109e204e603acdc7e1b26fdcd14ed8b099e3f0d4e6abee0a8893\": container with ID starting with 9900a7d38fd1109e204e603acdc7e1b26fdcd14ed8b099e3f0d4e6abee0a8893 not found: ID does not exist" containerID="9900a7d38fd1109e204e603acdc7e1b26fdcd14ed8b099e3f0d4e6abee0a8893" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.631365 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9900a7d38fd1109e204e603acdc7e1b26fdcd14ed8b099e3f0d4e6abee0a8893"} err="failed to get container status \"9900a7d38fd1109e204e603acdc7e1b26fdcd14ed8b099e3f0d4e6abee0a8893\": rpc error: code = NotFound desc = could not find container \"9900a7d38fd1109e204e603acdc7e1b26fdcd14ed8b099e3f0d4e6abee0a8893\": container with ID starting with 9900a7d38fd1109e204e603acdc7e1b26fdcd14ed8b099e3f0d4e6abee0a8893 not found: ID does not exist" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.631398 4727 scope.go:117] "RemoveContainer" containerID="ad6dd599b1c4d91590a8f882534d92fb7a63613b54563a2288167c9caa9e3a2f" Dec 10 15:13:41 crc kubenswrapper[4727]: E1210 15:13:41.631751 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad6dd599b1c4d91590a8f882534d92fb7a63613b54563a2288167c9caa9e3a2f\": container with ID starting with ad6dd599b1c4d91590a8f882534d92fb7a63613b54563a2288167c9caa9e3a2f not found: ID does not exist" 
containerID="ad6dd599b1c4d91590a8f882534d92fb7a63613b54563a2288167c9caa9e3a2f" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.631774 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6dd599b1c4d91590a8f882534d92fb7a63613b54563a2288167c9caa9e3a2f"} err="failed to get container status \"ad6dd599b1c4d91590a8f882534d92fb7a63613b54563a2288167c9caa9e3a2f\": rpc error: code = NotFound desc = could not find container \"ad6dd599b1c4d91590a8f882534d92fb7a63613b54563a2288167c9caa9e3a2f\": container with ID starting with ad6dd599b1c4d91590a8f882534d92fb7a63613b54563a2288167c9caa9e3a2f not found: ID does not exist" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.631789 4727 scope.go:117] "RemoveContainer" containerID="dc270d81f5d9a3cf551069e921e7674aa617e6d4e7b1a1ddb8f473442271daed" Dec 10 15:13:41 crc kubenswrapper[4727]: E1210 15:13:41.632294 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc270d81f5d9a3cf551069e921e7674aa617e6d4e7b1a1ddb8f473442271daed\": container with ID starting with dc270d81f5d9a3cf551069e921e7674aa617e6d4e7b1a1ddb8f473442271daed not found: ID does not exist" containerID="dc270d81f5d9a3cf551069e921e7674aa617e6d4e7b1a1ddb8f473442271daed" Dec 10 15:13:41 crc kubenswrapper[4727]: I1210 15:13:41.632316 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc270d81f5d9a3cf551069e921e7674aa617e6d4e7b1a1ddb8f473442271daed"} err="failed to get container status \"dc270d81f5d9a3cf551069e921e7674aa617e6d4e7b1a1ddb8f473442271daed\": rpc error: code = NotFound desc = could not find container \"dc270d81f5d9a3cf551069e921e7674aa617e6d4e7b1a1ddb8f473442271daed\": container with ID starting with dc270d81f5d9a3cf551069e921e7674aa617e6d4e7b1a1ddb8f473442271daed not found: ID does not exist" Dec 10 15:13:42 crc kubenswrapper[4727]: I1210 15:13:42.576702 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44b2869f-2526-484c-b2e3-8f5fde277391" path="/var/lib/kubelet/pods/44b2869f-2526-484c-b2e3-8f5fde277391/volumes" Dec 10 15:13:44 crc kubenswrapper[4727]: I1210 15:13:44.563526 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:13:44 crc kubenswrapper[4727]: E1210 15:13:44.564267 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:13:50 crc kubenswrapper[4727]: E1210 15:13:50.566294 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:13:52 crc kubenswrapper[4727]: E1210 15:13:52.566816 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:13:57 crc kubenswrapper[4727]: I1210 15:13:57.564311 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:13:57 crc kubenswrapper[4727]: E1210 15:13:57.565280 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:14:04 crc kubenswrapper[4727]: E1210 15:14:04.565045 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:14:06 crc kubenswrapper[4727]: E1210 15:14:06.574412 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:14:10 crc kubenswrapper[4727]: I1210 15:14:10.564258 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:14:10 crc kubenswrapper[4727]: E1210 15:14:10.565860 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:14:17 crc kubenswrapper[4727]: E1210 15:14:17.566074 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:14:18 crc kubenswrapper[4727]: E1210 15:14:18.565813 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:14:21 crc kubenswrapper[4727]: I1210 15:14:21.563025 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:14:21 crc kubenswrapper[4727]: E1210 15:14:21.563618 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:14:30 crc kubenswrapper[4727]: I1210 15:14:30.106255 4727 generic.go:334] "Generic (PLEG): container finished" podID="7b7cce52-493a-4a84-a51a-768d6d40d69d" containerID="0b0d9211fb577bcad5c9b15f086fee04cf88d8ca192772cd6dbd62c12f4be297" exitCode=2 Dec 10 15:14:30 crc kubenswrapper[4727]: I1210 15:14:30.106341 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" event={"ID":"7b7cce52-493a-4a84-a51a-768d6d40d69d","Type":"ContainerDied","Data":"0b0d9211fb577bcad5c9b15f086fee04cf88d8ca192772cd6dbd62c12f4be297"} Dec 10 15:14:31 crc kubenswrapper[4727]: I1210 15:14:31.647654 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" Dec 10 15:14:31 crc kubenswrapper[4727]: I1210 15:14:31.751469 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq4mq\" (UniqueName: \"kubernetes.io/projected/7b7cce52-493a-4a84-a51a-768d6d40d69d-kube-api-access-dq4mq\") pod \"7b7cce52-493a-4a84-a51a-768d6d40d69d\" (UID: \"7b7cce52-493a-4a84-a51a-768d6d40d69d\") " Dec 10 15:14:31 crc kubenswrapper[4727]: I1210 15:14:31.751662 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b7cce52-493a-4a84-a51a-768d6d40d69d-ssh-key\") pod \"7b7cce52-493a-4a84-a51a-768d6d40d69d\" (UID: \"7b7cce52-493a-4a84-a51a-768d6d40d69d\") " Dec 10 15:14:31 crc kubenswrapper[4727]: I1210 15:14:31.751794 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b7cce52-493a-4a84-a51a-768d6d40d69d-inventory\") pod \"7b7cce52-493a-4a84-a51a-768d6d40d69d\" (UID: \"7b7cce52-493a-4a84-a51a-768d6d40d69d\") " Dec 10 15:14:31 crc kubenswrapper[4727]: I1210 15:14:31.758703 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7cce52-493a-4a84-a51a-768d6d40d69d-kube-api-access-dq4mq" (OuterVolumeSpecName: "kube-api-access-dq4mq") pod "7b7cce52-493a-4a84-a51a-768d6d40d69d" (UID: "7b7cce52-493a-4a84-a51a-768d6d40d69d"). InnerVolumeSpecName "kube-api-access-dq4mq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:14:31 crc kubenswrapper[4727]: I1210 15:14:31.788829 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7cce52-493a-4a84-a51a-768d6d40d69d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7b7cce52-493a-4a84-a51a-768d6d40d69d" (UID: "7b7cce52-493a-4a84-a51a-768d6d40d69d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:14:31 crc kubenswrapper[4727]: I1210 15:14:31.797928 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7cce52-493a-4a84-a51a-768d6d40d69d-inventory" (OuterVolumeSpecName: "inventory") pod "7b7cce52-493a-4a84-a51a-768d6d40d69d" (UID: "7b7cce52-493a-4a84-a51a-768d6d40d69d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:14:31 crc kubenswrapper[4727]: I1210 15:14:31.854981 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b7cce52-493a-4a84-a51a-768d6d40d69d-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:14:31 crc kubenswrapper[4727]: I1210 15:14:31.855045 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq4mq\" (UniqueName: \"kubernetes.io/projected/7b7cce52-493a-4a84-a51a-768d6d40d69d-kube-api-access-dq4mq\") on node \"crc\" DevicePath \"\"" Dec 10 15:14:31 crc kubenswrapper[4727]: I1210 15:14:31.855062 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b7cce52-493a-4a84-a51a-768d6d40d69d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:14:32 crc kubenswrapper[4727]: I1210 15:14:32.127614 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" event={"ID":"7b7cce52-493a-4a84-a51a-768d6d40d69d","Type":"ContainerDied","Data":"41b69c4e7ca14b12a57bf4f0dcf5ef72057287a2b6b5f9c5746e93ca6ee8cb42"} Dec 10 15:14:32 crc kubenswrapper[4727]: I1210 15:14:32.127998 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b69c4e7ca14b12a57bf4f0dcf5ef72057287a2b6b5f9c5746e93ca6ee8cb42" Dec 10 15:14:32 crc kubenswrapper[4727]: I1210 15:14:32.127647 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z594q" Dec 10 15:14:32 crc kubenswrapper[4727]: E1210 15:14:32.566208 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:14:33 crc kubenswrapper[4727]: E1210 15:14:33.565020 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:14:34 crc kubenswrapper[4727]: I1210 15:14:34.563521 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:14:34 crc kubenswrapper[4727]: E1210 15:14:34.564315 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.031835 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr"] Dec 10 15:14:40 crc kubenswrapper[4727]: E1210 15:14:40.033118 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44b2869f-2526-484c-b2e3-8f5fde277391" containerName="extract-utilities" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 
15:14:40.033140 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b2869f-2526-484c-b2e3-8f5fde277391" containerName="extract-utilities" Dec 10 15:14:40 crc kubenswrapper[4727]: E1210 15:14:40.033159 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7cce52-493a-4a84-a51a-768d6d40d69d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.033171 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7cce52-493a-4a84-a51a-768d6d40d69d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:14:40 crc kubenswrapper[4727]: E1210 15:14:40.033196 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44b2869f-2526-484c-b2e3-8f5fde277391" containerName="extract-content" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.033230 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b2869f-2526-484c-b2e3-8f5fde277391" containerName="extract-content" Dec 10 15:14:40 crc kubenswrapper[4727]: E1210 15:14:40.033254 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44b2869f-2526-484c-b2e3-8f5fde277391" containerName="registry-server" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.033264 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b2869f-2526-484c-b2e3-8f5fde277391" containerName="registry-server" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.033567 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7cce52-493a-4a84-a51a-768d6d40d69d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.033586 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="44b2869f-2526-484c-b2e3-8f5fde277391" containerName="registry-server" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.034829 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.040333 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.040467 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j82js" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.040877 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.041082 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.055618 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr"] Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.151730 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbkkh\" (UniqueName: \"kubernetes.io/projected/08f2ae25-5b39-416e-9f25-830650cc91d0-kube-api-access-xbkkh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b76mr\" (UID: \"08f2ae25-5b39-416e-9f25-830650cc91d0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.152505 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08f2ae25-5b39-416e-9f25-830650cc91d0-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b76mr\" (UID: \"08f2ae25-5b39-416e-9f25-830650cc91d0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.152636 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08f2ae25-5b39-416e-9f25-830650cc91d0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b76mr\" (UID: \"08f2ae25-5b39-416e-9f25-830650cc91d0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.254599 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08f2ae25-5b39-416e-9f25-830650cc91d0-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b76mr\" (UID: \"08f2ae25-5b39-416e-9f25-830650cc91d0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.254685 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08f2ae25-5b39-416e-9f25-830650cc91d0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b76mr\" (UID: \"08f2ae25-5b39-416e-9f25-830650cc91d0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.254825 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbkkh\" (UniqueName: \"kubernetes.io/projected/08f2ae25-5b39-416e-9f25-830650cc91d0-kube-api-access-xbkkh\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-b76mr\" (UID: \"08f2ae25-5b39-416e-9f25-830650cc91d0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.261000 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08f2ae25-5b39-416e-9f25-830650cc91d0-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b76mr\" (UID: \"08f2ae25-5b39-416e-9f25-830650cc91d0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.264348 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08f2ae25-5b39-416e-9f25-830650cc91d0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b76mr\" (UID: \"08f2ae25-5b39-416e-9f25-830650cc91d0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.273002 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbkkh\" (UniqueName: \"kubernetes.io/projected/08f2ae25-5b39-416e-9f25-830650cc91d0-kube-api-access-xbkkh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b76mr\" (UID: \"08f2ae25-5b39-416e-9f25-830650cc91d0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" Dec 10 15:14:40 crc kubenswrapper[4727]: I1210 15:14:40.366476 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" Dec 10 15:14:41 crc kubenswrapper[4727]: I1210 15:14:41.011568 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr"] Dec 10 15:14:41 crc kubenswrapper[4727]: I1210 15:14:41.224324 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" event={"ID":"08f2ae25-5b39-416e-9f25-830650cc91d0","Type":"ContainerStarted","Data":"e052c6b727e48186429ab6fb2ec8d4d1eeebccdf7b878cfd51a6a37b2577a43f"} Dec 10 15:14:42 crc kubenswrapper[4727]: I1210 15:14:42.239824 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" event={"ID":"08f2ae25-5b39-416e-9f25-830650cc91d0","Type":"ContainerStarted","Data":"eff88eaed795c8b287d8a8e53cfd0bb594c960a1263d4344199892e0b8745046"} Dec 10 15:14:42 crc kubenswrapper[4727]: I1210 15:14:42.262420 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" podStartSLOduration=1.7995442289999999 podStartE2EDuration="2.262393911s" podCreationTimestamp="2025-12-10 15:14:40 +0000 UTC" firstStartedPulling="2025-12-10 15:14:41.0096352 +0000 UTC m=+2585.204409742" lastFinishedPulling="2025-12-10 15:14:41.472484882 +0000 UTC m=+2585.667259424" observedRunningTime="2025-12-10 15:14:42.255678932 +0000 UTC m=+2586.450453474" watchObservedRunningTime="2025-12-10 15:14:42.262393911 +0000 UTC m=+2586.457168453" Dec 10 15:14:45 crc kubenswrapper[4727]: E1210 15:14:45.565997 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:14:47 crc kubenswrapper[4727]: I1210 15:14:47.562364 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:14:47 crc kubenswrapper[4727]: E1210 15:14:47.562970 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:14:48 crc kubenswrapper[4727]: E1210 15:14:48.565249 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:14:59 crc kubenswrapper[4727]: I1210 15:14:59.563543 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:14:59 crc kubenswrapper[4727]: E1210 15:14:59.564292 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.156902 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4"] Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.159440 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.163362 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.163629 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.170970 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4"] Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.334467 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cstl\" (UniqueName: \"kubernetes.io/projected/d5d51a83-b933-4957-8890-e7141b841a89-kube-api-access-7cstl\") pod \"collect-profiles-29422995-gtzc4\" (UID: \"d5d51a83-b933-4957-8890-e7141b841a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.334612 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5d51a83-b933-4957-8890-e7141b841a89-config-volume\") pod \"collect-profiles-29422995-gtzc4\" (UID: \"d5d51a83-b933-4957-8890-e7141b841a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.334650 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5d51a83-b933-4957-8890-e7141b841a89-secret-volume\") pod \"collect-profiles-29422995-gtzc4\" (UID: \"d5d51a83-b933-4957-8890-e7141b841a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.437774 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cstl\" (UniqueName: \"kubernetes.io/projected/d5d51a83-b933-4957-8890-e7141b841a89-kube-api-access-7cstl\") pod \"collect-profiles-29422995-gtzc4\" (UID: \"d5d51a83-b933-4957-8890-e7141b841a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.437873 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5d51a83-b933-4957-8890-e7141b841a89-config-volume\") pod \"collect-profiles-29422995-gtzc4\" (UID: \"d5d51a83-b933-4957-8890-e7141b841a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.437930 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5d51a83-b933-4957-8890-e7141b841a89-secret-volume\") pod \"collect-profiles-29422995-gtzc4\" (UID: \"d5d51a83-b933-4957-8890-e7141b841a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.439169 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5d51a83-b933-4957-8890-e7141b841a89-config-volume\") pod 
\"collect-profiles-29422995-gtzc4\" (UID: \"d5d51a83-b933-4957-8890-e7141b841a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.444859 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5d51a83-b933-4957-8890-e7141b841a89-secret-volume\") pod \"collect-profiles-29422995-gtzc4\" (UID: \"d5d51a83-b933-4957-8890-e7141b841a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.460122 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cstl\" (UniqueName: \"kubernetes.io/projected/d5d51a83-b933-4957-8890-e7141b841a89-kube-api-access-7cstl\") pod \"collect-profiles-29422995-gtzc4\" (UID: \"d5d51a83-b933-4957-8890-e7141b841a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.491447 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" Dec 10 15:15:00 crc kubenswrapper[4727]: E1210 15:15:00.566394 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:15:00 crc kubenswrapper[4727]: I1210 15:15:00.966900 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4"] Dec 10 15:15:01 crc kubenswrapper[4727]: I1210 15:15:01.527484 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" event={"ID":"d5d51a83-b933-4957-8890-e7141b841a89","Type":"ContainerStarted","Data":"104f5c00a6433b70ee0fbdb99cbd1a921826a10a6cf0ddf89482527bad47e496"} Dec 10 15:15:01 crc kubenswrapper[4727]: I1210 15:15:01.527849 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" event={"ID":"d5d51a83-b933-4957-8890-e7141b841a89","Type":"ContainerStarted","Data":"7c7354ec2461d243d5c3a1c4d7b56316bf6212d36e81b94c7a9d9ae510b22b1a"} Dec 10 15:15:02 crc kubenswrapper[4727]: I1210 15:15:02.543396 4727 generic.go:334] "Generic (PLEG): container finished" podID="d5d51a83-b933-4957-8890-e7141b841a89" containerID="104f5c00a6433b70ee0fbdb99cbd1a921826a10a6cf0ddf89482527bad47e496" exitCode=0 Dec 10 15:15:02 crc kubenswrapper[4727]: I1210 15:15:02.543468 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" event={"ID":"d5d51a83-b933-4957-8890-e7141b841a89","Type":"ContainerDied","Data":"104f5c00a6433b70ee0fbdb99cbd1a921826a10a6cf0ddf89482527bad47e496"} Dec 10 15:15:02 crc kubenswrapper[4727]: E1210 15:15:02.567633 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:15:03 crc kubenswrapper[4727]: 
I1210 15:15:03.973632 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" Dec 10 15:15:04 crc kubenswrapper[4727]: I1210 15:15:04.027127 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5d51a83-b933-4957-8890-e7141b841a89-secret-volume\") pod \"d5d51a83-b933-4957-8890-e7141b841a89\" (UID: \"d5d51a83-b933-4957-8890-e7141b841a89\") " Dec 10 15:15:04 crc kubenswrapper[4727]: I1210 15:15:04.027339 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cstl\" (UniqueName: \"kubernetes.io/projected/d5d51a83-b933-4957-8890-e7141b841a89-kube-api-access-7cstl\") pod \"d5d51a83-b933-4957-8890-e7141b841a89\" (UID: \"d5d51a83-b933-4957-8890-e7141b841a89\") " Dec 10 15:15:04 crc kubenswrapper[4727]: I1210 15:15:04.027800 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5d51a83-b933-4957-8890-e7141b841a89-config-volume\") pod \"d5d51a83-b933-4957-8890-e7141b841a89\" (UID: \"d5d51a83-b933-4957-8890-e7141b841a89\") " Dec 10 15:15:04 crc kubenswrapper[4727]: I1210 15:15:04.028515 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5d51a83-b933-4957-8890-e7141b841a89-config-volume" (OuterVolumeSpecName: "config-volume") pod "d5d51a83-b933-4957-8890-e7141b841a89" (UID: "d5d51a83-b933-4957-8890-e7141b841a89"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:15:04 crc kubenswrapper[4727]: I1210 15:15:04.028867 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5d51a83-b933-4957-8890-e7141b841a89-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:15:04 crc kubenswrapper[4727]: I1210 15:15:04.034271 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d51a83-b933-4957-8890-e7141b841a89-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d5d51a83-b933-4957-8890-e7141b841a89" (UID: "d5d51a83-b933-4957-8890-e7141b841a89"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:15:04 crc kubenswrapper[4727]: I1210 15:15:04.051559 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5d51a83-b933-4957-8890-e7141b841a89-kube-api-access-7cstl" (OuterVolumeSpecName: "kube-api-access-7cstl") pod "d5d51a83-b933-4957-8890-e7141b841a89" (UID: "d5d51a83-b933-4957-8890-e7141b841a89"). InnerVolumeSpecName "kube-api-access-7cstl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:15:04 crc kubenswrapper[4727]: I1210 15:15:04.130987 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5d51a83-b933-4957-8890-e7141b841a89-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:15:04 crc kubenswrapper[4727]: I1210 15:15:04.131038 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cstl\" (UniqueName: \"kubernetes.io/projected/d5d51a83-b933-4957-8890-e7141b841a89-kube-api-access-7cstl\") on node \"crc\" DevicePath \"\"" Dec 10 15:15:04 crc kubenswrapper[4727]: I1210 15:15:04.567339 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" Dec 10 15:15:04 crc kubenswrapper[4727]: I1210 15:15:04.578280 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4" event={"ID":"d5d51a83-b933-4957-8890-e7141b841a89","Type":"ContainerDied","Data":"7c7354ec2461d243d5c3a1c4d7b56316bf6212d36e81b94c7a9d9ae510b22b1a"} Dec 10 15:15:04 crc kubenswrapper[4727]: I1210 15:15:04.578574 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c7354ec2461d243d5c3a1c4d7b56316bf6212d36e81b94c7a9d9ae510b22b1a" Dec 10 15:15:04 crc kubenswrapper[4727]: I1210 15:15:04.647475 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf"] Dec 10 15:15:04 crc kubenswrapper[4727]: I1210 15:15:04.658832 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422950-2hhnf"] Dec 10 15:15:06 crc kubenswrapper[4727]: I1210 15:15:06.578422 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc615bdc-da08-4680-afa5-d500f597d18b" path="/var/lib/kubelet/pods/dc615bdc-da08-4680-afa5-d500f597d18b/volumes" Dec 10 15:15:12 crc kubenswrapper[4727]: I1210 15:15:12.563470 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:15:12 crc kubenswrapper[4727]: E1210 15:15:12.564944 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:15:13 crc kubenswrapper[4727]: E1210 15:15:13.565447 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:15:13 crc kubenswrapper[4727]: E1210 15:15:13.566250 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:15:22 crc kubenswrapper[4727]: I1210 15:15:22.197217 4727 scope.go:117] "RemoveContainer" containerID="362c86ff2ae56d728e14e21cff89033742d4381bf06ebf12435a98a04e511c4b" Dec 10 15:15:23 crc kubenswrapper[4727]: I1210 15:15:23.563015 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:15:23 crc kubenswrapper[4727]: E1210 15:15:23.563513 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:15:27 crc kubenswrapper[4727]: E1210 15:15:27.566018 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:15:28 crc kubenswrapper[4727]: E1210 15:15:28.565763 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:15:37 crc kubenswrapper[4727]: I1210 15:15:37.564193 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:15:37 crc kubenswrapper[4727]: E1210 15:15:37.565095 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:15:38 crc kubenswrapper[4727]: E1210 15:15:38.566019 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:15:42 crc kubenswrapper[4727]: E1210 15:15:42.586830 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:15:49 crc kubenswrapper[4727]: I1210 15:15:49.730115 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vw9lc"] Dec 10 15:15:49 crc kubenswrapper[4727]: E1210 15:15:49.731513 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5d51a83-b933-4957-8890-e7141b841a89" containerName="collect-profiles" Dec 10 15:15:49 crc kubenswrapper[4727]: I1210 15:15:49.731532 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d51a83-b933-4957-8890-e7141b841a89" containerName="collect-profiles" Dec 10 15:15:49 crc kubenswrapper[4727]: I1210 15:15:49.731936 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5d51a83-b933-4957-8890-e7141b841a89" containerName="collect-profiles" Dec 10 15:15:49 crc kubenswrapper[4727]: I1210 15:15:49.734349 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:15:49 crc kubenswrapper[4727]: I1210 15:15:49.745892 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vw9lc"] Dec 10 15:15:49 crc kubenswrapper[4727]: I1210 15:15:49.812863 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmppl\" (UniqueName: \"kubernetes.io/projected/b18c865c-2ed4-4c41-b116-f97d6f1ef470-kube-api-access-dmppl\") pod \"community-operators-vw9lc\" (UID: \"b18c865c-2ed4-4c41-b116-f97d6f1ef470\") " pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:15:49 crc kubenswrapper[4727]: I1210 15:15:49.812980 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b18c865c-2ed4-4c41-b116-f97d6f1ef470-catalog-content\") pod \"community-operators-vw9lc\" (UID: \"b18c865c-2ed4-4c41-b116-f97d6f1ef470\") " pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:15:49 crc kubenswrapper[4727]: I1210 15:15:49.813268 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b18c865c-2ed4-4c41-b116-f97d6f1ef470-utilities\") pod \"community-operators-vw9lc\" (UID: \"b18c865c-2ed4-4c41-b116-f97d6f1ef470\") " pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:15:49 crc kubenswrapper[4727]: I1210 15:15:49.915322 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b18c865c-2ed4-4c41-b116-f97d6f1ef470-utilities\") pod \"community-operators-vw9lc\" (UID: \"b18c865c-2ed4-4c41-b116-f97d6f1ef470\") " pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:15:49 crc kubenswrapper[4727]: I1210 15:15:49.915426 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmppl\" (UniqueName: \"kubernetes.io/projected/b18c865c-2ed4-4c41-b116-f97d6f1ef470-kube-api-access-dmppl\") pod \"community-operators-vw9lc\" (UID: \"b18c865c-2ed4-4c41-b116-f97d6f1ef470\") " pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:15:49 crc kubenswrapper[4727]: I1210 15:15:49.915465 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b18c865c-2ed4-4c41-b116-f97d6f1ef470-catalog-content\") pod \"community-operators-vw9lc\" (UID: \"b18c865c-2ed4-4c41-b116-f97d6f1ef470\") " pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:15:49 crc kubenswrapper[4727]: I1210 15:15:49.915893 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b18c865c-2ed4-4c41-b116-f97d6f1ef470-utilities\") pod \"community-operators-vw9lc\" (UID: \"b18c865c-2ed4-4c41-b116-f97d6f1ef470\") " pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:15:49 crc kubenswrapper[4727]: I1210 15:15:49.916048 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b18c865c-2ed4-4c41-b116-f97d6f1ef470-catalog-content\") pod \"community-operators-vw9lc\" (UID: \"b18c865c-2ed4-4c41-b116-f97d6f1ef470\") " pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:15:49 crc kubenswrapper[4727]: I1210 15:15:49.938617 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dmppl\" (UniqueName: \"kubernetes.io/projected/b18c865c-2ed4-4c41-b116-f97d6f1ef470-kube-api-access-dmppl\") pod \"community-operators-vw9lc\" (UID: \"b18c865c-2ed4-4c41-b116-f97d6f1ef470\") " pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:15:50 crc kubenswrapper[4727]: I1210 15:15:50.066738 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:15:50 crc kubenswrapper[4727]: I1210 15:15:50.874737 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vw9lc"] Dec 10 15:15:51 crc kubenswrapper[4727]: I1210 15:15:51.230278 4727 generic.go:334] "Generic (PLEG): container finished" podID="b18c865c-2ed4-4c41-b116-f97d6f1ef470" containerID="4d721009cdbe1d523e27ef86a1c3db4360729f9d5c6f13a6c01852a891e378f3" exitCode=0 Dec 10 15:15:51 crc kubenswrapper[4727]: I1210 15:15:51.230330 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vw9lc" event={"ID":"b18c865c-2ed4-4c41-b116-f97d6f1ef470","Type":"ContainerDied","Data":"4d721009cdbe1d523e27ef86a1c3db4360729f9d5c6f13a6c01852a891e378f3"} Dec 10 15:15:51 crc kubenswrapper[4727]: I1210 15:15:51.230363 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vw9lc" event={"ID":"b18c865c-2ed4-4c41-b116-f97d6f1ef470","Type":"ContainerStarted","Data":"4424affe82ca817ef97197ec396598064900f0c6be4b5a92daeec1e5ff0c1235"} Dec 10 15:15:51 crc kubenswrapper[4727]: I1210 15:15:51.562569 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:15:51 crc kubenswrapper[4727]: E1210 15:15:51.562869 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:15:52 crc kubenswrapper[4727]: E1210 15:15:52.565334 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:15:53 crc kubenswrapper[4727]: I1210 15:15:53.254624 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vw9lc" event={"ID":"b18c865c-2ed4-4c41-b116-f97d6f1ef470","Type":"ContainerStarted","Data":"5d1bfe73f73972678d9278f9ce5ef996126cb6f753fdfd5a3cf23aa849193d81"} Dec 10 15:15:54 crc kubenswrapper[4727]: I1210 15:15:54.268738 4727 generic.go:334] "Generic (PLEG): container finished" podID="b18c865c-2ed4-4c41-b116-f97d6f1ef470" containerID="5d1bfe73f73972678d9278f9ce5ef996126cb6f753fdfd5a3cf23aa849193d81" exitCode=0 Dec 10 15:15:54 crc kubenswrapper[4727]: I1210 15:15:54.268782 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vw9lc" event={"ID":"b18c865c-2ed4-4c41-b116-f97d6f1ef470","Type":"ContainerDied","Data":"5d1bfe73f73972678d9278f9ce5ef996126cb6f753fdfd5a3cf23aa849193d81"} 
Dec 10 15:15:54 crc kubenswrapper[4727]: E1210 15:15:54.598384 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:15:55 crc kubenswrapper[4727]: I1210 15:15:55.281065 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vw9lc" event={"ID":"b18c865c-2ed4-4c41-b116-f97d6f1ef470","Type":"ContainerStarted","Data":"83d4419200f28b74546e3e021691bfe940c7f37594481ec363f7ba2bfa70f76b"} Dec 10 15:15:55 crc kubenswrapper[4727]: I1210 15:15:55.310356 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vw9lc" podStartSLOduration=2.603303637 podStartE2EDuration="6.310329663s" podCreationTimestamp="2025-12-10 15:15:49 +0000 UTC" firstStartedPulling="2025-12-10 15:15:51.232183757 +0000 UTC m=+2655.426958299" lastFinishedPulling="2025-12-10 15:15:54.939209783 +0000 UTC m=+2659.133984325" observedRunningTime="2025-12-10 15:15:55.300047706 +0000 UTC m=+2659.494822258" watchObservedRunningTime="2025-12-10 15:15:55.310329663 +0000 UTC m=+2659.505104205" Dec 10 15:16:00 crc kubenswrapper[4727]: I1210 15:16:00.068556 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:16:00 crc kubenswrapper[4727]: I1210 15:16:00.068947 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:16:00 crc kubenswrapper[4727]: I1210 15:16:00.147227 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:16:00 crc kubenswrapper[4727]: I1210 15:16:00.384183 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:16:00 crc kubenswrapper[4727]: I1210 15:16:00.448994 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vw9lc"] Dec 10 15:16:02 crc kubenswrapper[4727]: I1210 15:16:02.346981 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vw9lc" podUID="b18c865c-2ed4-4c41-b116-f97d6f1ef470" containerName="registry-server" containerID="cri-o://83d4419200f28b74546e3e021691bfe940c7f37594481ec363f7ba2bfa70f76b" gracePeriod=2 Dec 10 15:16:02 crc kubenswrapper[4727]: I1210 15:16:02.942919 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.022490 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b18c865c-2ed4-4c41-b116-f97d6f1ef470-utilities\") pod \"b18c865c-2ed4-4c41-b116-f97d6f1ef470\" (UID: \"b18c865c-2ed4-4c41-b116-f97d6f1ef470\") " Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.023079 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmppl\" (UniqueName: \"kubernetes.io/projected/b18c865c-2ed4-4c41-b116-f97d6f1ef470-kube-api-access-dmppl\") pod \"b18c865c-2ed4-4c41-b116-f97d6f1ef470\" (UID: \"b18c865c-2ed4-4c41-b116-f97d6f1ef470\") " Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.023101 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b18c865c-2ed4-4c41-b116-f97d6f1ef470-catalog-content\") pod \"b18c865c-2ed4-4c41-b116-f97d6f1ef470\" (UID: \"b18c865c-2ed4-4c41-b116-f97d6f1ef470\") " Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.027741 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b18c865c-2ed4-4c41-b116-f97d6f1ef470-utilities" (OuterVolumeSpecName: "utilities") pod "b18c865c-2ed4-4c41-b116-f97d6f1ef470" (UID: "b18c865c-2ed4-4c41-b116-f97d6f1ef470"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.054486 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b18c865c-2ed4-4c41-b116-f97d6f1ef470-kube-api-access-dmppl" (OuterVolumeSpecName: "kube-api-access-dmppl") pod "b18c865c-2ed4-4c41-b116-f97d6f1ef470" (UID: "b18c865c-2ed4-4c41-b116-f97d6f1ef470"). InnerVolumeSpecName "kube-api-access-dmppl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.086325 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b18c865c-2ed4-4c41-b116-f97d6f1ef470-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b18c865c-2ed4-4c41-b116-f97d6f1ef470" (UID: "b18c865c-2ed4-4c41-b116-f97d6f1ef470"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.125468 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b18c865c-2ed4-4c41-b116-f97d6f1ef470-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.125500 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmppl\" (UniqueName: \"kubernetes.io/projected/b18c865c-2ed4-4c41-b116-f97d6f1ef470-kube-api-access-dmppl\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.125511 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b18c865c-2ed4-4c41-b116-f97d6f1ef470-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.360419 4727 generic.go:334] "Generic (PLEG): container finished" podID="b18c865c-2ed4-4c41-b116-f97d6f1ef470" containerID="83d4419200f28b74546e3e021691bfe940c7f37594481ec363f7ba2bfa70f76b" exitCode=0 Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.360466 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vw9lc" event={"ID":"b18c865c-2ed4-4c41-b116-f97d6f1ef470","Type":"ContainerDied","Data":"83d4419200f28b74546e3e021691bfe940c7f37594481ec363f7ba2bfa70f76b"} Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.360493 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vw9lc" event={"ID":"b18c865c-2ed4-4c41-b116-f97d6f1ef470","Type":"ContainerDied","Data":"4424affe82ca817ef97197ec396598064900f0c6be4b5a92daeec1e5ff0c1235"} Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.360511 4727 scope.go:117] "RemoveContainer" containerID="83d4419200f28b74546e3e021691bfe940c7f37594481ec363f7ba2bfa70f76b" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.360518 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vw9lc" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.396078 4727 scope.go:117] "RemoveContainer" containerID="5d1bfe73f73972678d9278f9ce5ef996126cb6f753fdfd5a3cf23aa849193d81" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.404028 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vw9lc"] Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.426794 4727 scope.go:117] "RemoveContainer" containerID="4d721009cdbe1d523e27ef86a1c3db4360729f9d5c6f13a6c01852a891e378f3" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.433503 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vw9lc"] Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.477356 4727 scope.go:117] "RemoveContainer" containerID="83d4419200f28b74546e3e021691bfe940c7f37594481ec363f7ba2bfa70f76b" Dec 10 15:16:03 crc kubenswrapper[4727]: E1210 15:16:03.478020 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83d4419200f28b74546e3e021691bfe940c7f37594481ec363f7ba2bfa70f76b\": container with ID starting with 83d4419200f28b74546e3e021691bfe940c7f37594481ec363f7ba2bfa70f76b not found: ID does not exist" containerID="83d4419200f28b74546e3e021691bfe940c7f37594481ec363f7ba2bfa70f76b" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.478076 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83d4419200f28b74546e3e021691bfe940c7f37594481ec363f7ba2bfa70f76b"} err="failed to get container status \"83d4419200f28b74546e3e021691bfe940c7f37594481ec363f7ba2bfa70f76b\": rpc error: code = NotFound desc = could not find container \"83d4419200f28b74546e3e021691bfe940c7f37594481ec363f7ba2bfa70f76b\": container with ID starting with 83d4419200f28b74546e3e021691bfe940c7f37594481ec363f7ba2bfa70f76b not found: ID does not exist" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.478115 4727 scope.go:117] "RemoveContainer" containerID="5d1bfe73f73972678d9278f9ce5ef996126cb6f753fdfd5a3cf23aa849193d81" Dec 10 15:16:03 crc kubenswrapper[4727]: E1210 15:16:03.478832 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d1bfe73f73972678d9278f9ce5ef996126cb6f753fdfd5a3cf23aa849193d81\": container with ID starting with 5d1bfe73f73972678d9278f9ce5ef996126cb6f753fdfd5a3cf23aa849193d81 not found: ID does not exist" containerID="5d1bfe73f73972678d9278f9ce5ef996126cb6f753fdfd5a3cf23aa849193d81" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.478883 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d1bfe73f73972678d9278f9ce5ef996126cb6f753fdfd5a3cf23aa849193d81"} err="failed to get container status \"5d1bfe73f73972678d9278f9ce5ef996126cb6f753fdfd5a3cf23aa849193d81\": rpc error: code = NotFound desc = could not find container \"5d1bfe73f73972678d9278f9ce5ef996126cb6f753fdfd5a3cf23aa849193d81\": container with ID starting with 5d1bfe73f73972678d9278f9ce5ef996126cb6f753fdfd5a3cf23aa849193d81 not found: ID does not exist" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.478929 4727 scope.go:117] "RemoveContainer" containerID="4d721009cdbe1d523e27ef86a1c3db4360729f9d5c6f13a6c01852a891e378f3" Dec 10 15:16:03 crc kubenswrapper[4727]: E1210 15:16:03.479372 4727 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4d721009cdbe1d523e27ef86a1c3db4360729f9d5c6f13a6c01852a891e378f3\": container with ID starting with 4d721009cdbe1d523e27ef86a1c3db4360729f9d5c6f13a6c01852a891e378f3 not found: ID does not exist" containerID="4d721009cdbe1d523e27ef86a1c3db4360729f9d5c6f13a6c01852a891e378f3" Dec 10 15:16:03 crc kubenswrapper[4727]: I1210 15:16:03.479407 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d721009cdbe1d523e27ef86a1c3db4360729f9d5c6f13a6c01852a891e378f3"} err="failed to get container status \"4d721009cdbe1d523e27ef86a1c3db4360729f9d5c6f13a6c01852a891e378f3\": rpc error: code = NotFound desc = could not find container \"4d721009cdbe1d523e27ef86a1c3db4360729f9d5c6f13a6c01852a891e378f3\": container with ID starting with 4d721009cdbe1d523e27ef86a1c3db4360729f9d5c6f13a6c01852a891e378f3 not found: ID does not exist" Dec 10 15:16:03 crc kubenswrapper[4727]: E1210 15:16:03.565067 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:16:04 crc kubenswrapper[4727]: I1210 15:16:04.579706 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b18c865c-2ed4-4c41-b116-f97d6f1ef470" path="/var/lib/kubelet/pods/b18c865c-2ed4-4c41-b116-f97d6f1ef470/volumes" Dec 10 15:16:06 crc kubenswrapper[4727]: I1210 15:16:06.576488 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:16:06 crc kubenswrapper[4727]: E1210 15:16:06.577106 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:16:08 crc kubenswrapper[4727]: E1210 15:16:08.565334 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:16:17 crc kubenswrapper[4727]: E1210 15:16:17.566358 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:16:18 crc kubenswrapper[4727]: I1210 15:16:18.564624 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:16:18 crc kubenswrapper[4727]: E1210 15:16:18.565208 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:16:21 crc kubenswrapper[4727]: E1210 15:16:21.566291 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:16:32 crc kubenswrapper[4727]: I1210 15:16:32.566628 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:16:32 crc kubenswrapper[4727]: E1210 15:16:32.567605 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:16:32 crc kubenswrapper[4727]: E1210 15:16:32.567637 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:16:32 crc kubenswrapper[4727]: E1210 15:16:32.567685 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:16:43 crc kubenswrapper[4727]: E1210 15:16:43.565650 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:16:44 crc kubenswrapper[4727]: I1210 15:16:44.563597 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:16:44 crc kubenswrapper[4727]: E1210 15:16:44.563894 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:16:44 crc kubenswrapper[4727]: E1210 15:16:44.564680 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:16:54 crc kubenswrapper[4727]: E1210 15:16:54.565455 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:16:56 crc kubenswrapper[4727]: I1210 15:16:56.570978 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:16:56 crc kubenswrapper[4727]: E1210 15:16:56.571868 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:16:59 crc kubenswrapper[4727]: E1210 15:16:59.566348 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:17:05 crc kubenswrapper[4727]: E1210 15:17:05.567449 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:17:10 crc kubenswrapper[4727]: I1210 15:17:10.563519 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:17:10 crc kubenswrapper[4727]: E1210 15:17:10.564462 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:17:11 crc kubenswrapper[4727]: E1210 15:17:11.565824 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:17:18 crc kubenswrapper[4727]: E1210 15:17:18.565434 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:17:23 crc kubenswrapper[4727]: E1210 15:17:23.565228 4727 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:17:25 crc kubenswrapper[4727]: I1210 15:17:25.563698 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:17:25 crc kubenswrapper[4727]: E1210 15:17:25.564589 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:17:30 crc kubenswrapper[4727]: E1210 15:17:30.565579 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:17:38 crc kubenswrapper[4727]: E1210 15:17:38.566941 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:17:40 crc kubenswrapper[4727]: I1210 15:17:40.563216 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:17:41 crc kubenswrapper[4727]: I1210 15:17:41.306487 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"6195be96b514419e18bf6ce7d1ba910b002fff9465fc1e1d0763a30d95f493fe"} Dec 10 15:17:43 crc kubenswrapper[4727]: E1210 15:17:43.565690 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:17:53 crc kubenswrapper[4727]: I1210 15:17:53.566002 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:17:53 crc kubenswrapper[4727]: E1210 15:17:53.688657 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:17:53 crc kubenswrapper[4727]: E1210 15:17:53.688742 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:17:53 crc kubenswrapper[4727]: E1210 15:17:53.688967 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 10 15:17:53 crc kubenswrapper[4727]: E1210 15:17:53.690188 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:17:56 crc kubenswrapper[4727]: E1210 15:17:56.573647 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.260872 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jrghq"] Dec 10 15:18:04 crc kubenswrapper[4727]: E1210 15:18:04.261843 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18c865c-2ed4-4c41-b116-f97d6f1ef470" containerName="extract-utilities" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.261859 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18c865c-2ed4-4c41-b116-f97d6f1ef470" containerName="extract-utilities" Dec 10 15:18:04 crc kubenswrapper[4727]: E1210 15:18:04.261887 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18c865c-2ed4-4c41-b116-f97d6f1ef470" containerName="registry-server" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.261895 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18c865c-2ed4-4c41-b116-f97d6f1ef470" containerName="registry-server" Dec 10 15:18:04 crc kubenswrapper[4727]: E1210 15:18:04.261937 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18c865c-2ed4-4c41-b116-f97d6f1ef470" containerName="extract-content" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.261946 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18c865c-2ed4-4c41-b116-f97d6f1ef470" containerName="extract-content" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.262181 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b18c865c-2ed4-4c41-b116-f97d6f1ef470" containerName="registry-server" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.263934 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrghq" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.291854 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrghq"] Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.372171 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-utilities\") pod \"redhat-operators-jrghq\" (UID: \"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74\") " pod="openshift-marketplace/redhat-operators-jrghq" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.372477 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-catalog-content\") pod \"redhat-operators-jrghq\" (UID: \"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74\") " pod="openshift-marketplace/redhat-operators-jrghq" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.372585 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v979m\" (UniqueName: \"kubernetes.io/projected/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-kube-api-access-v979m\") pod \"redhat-operators-jrghq\" (UID: \"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74\") " pod="openshift-marketplace/redhat-operators-jrghq" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.474964 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-utilities\") pod \"redhat-operators-jrghq\" (UID: \"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74\") " pod="openshift-marketplace/redhat-operators-jrghq" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.475016 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-catalog-content\") pod \"redhat-operators-jrghq\" (UID: \"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74\") " pod="openshift-marketplace/redhat-operators-jrghq" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.475037 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v979m\" (UniqueName: \"kubernetes.io/projected/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-kube-api-access-v979m\") pod \"redhat-operators-jrghq\" (UID: \"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74\") " pod="openshift-marketplace/redhat-operators-jrghq" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.475626 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-utilities\") pod \"redhat-operators-jrghq\" (UID: \"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74\") " pod="openshift-marketplace/redhat-operators-jrghq" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.475707 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-catalog-content\") pod \"redhat-operators-jrghq\" (UID: \"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74\") " pod="openshift-marketplace/redhat-operators-jrghq" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.497978 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v979m\" (UniqueName: \"kubernetes.io/projected/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-kube-api-access-v979m\") pod \"redhat-operators-jrghq\" (UID: \"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74\") " pod="openshift-marketplace/redhat-operators-jrghq" Dec 10 15:18:04 crc kubenswrapper[4727]: I1210 15:18:04.588071 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrghq" Dec 10 15:18:05 crc kubenswrapper[4727]: I1210 15:18:05.158233 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrghq"] Dec 10 15:18:05 crc kubenswrapper[4727]: E1210 15:18:05.567057 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:18:05 crc kubenswrapper[4727]: I1210 15:18:05.575450 4727 generic.go:334] "Generic (PLEG): container finished" podID="afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" containerID="7f68da99fc63f51ccf601d5527dfb62166ac37f4941c5a952f76539866f0e60c" exitCode=0 Dec 10 15:18:05 crc kubenswrapper[4727]: I1210 15:18:05.575506 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrghq" event={"ID":"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74","Type":"ContainerDied","Data":"7f68da99fc63f51ccf601d5527dfb62166ac37f4941c5a952f76539866f0e60c"} Dec 10 15:18:05 crc kubenswrapper[4727]: I1210 15:18:05.575557 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrghq" event={"ID":"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74","Type":"ContainerStarted","Data":"535ffbf7773c0a36396230c29fca6fbbed3af82e7eca0937b8686f9ec8faee0c"} Dec 10 15:18:06 crc kubenswrapper[4727]: I1210 15:18:06.591694 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrghq" event={"ID":"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74","Type":"ContainerStarted","Data":"6f40874188b5636676b4b7e12a9c9b4e82cb15e610c81511493d17ae9326e0ee"} Dec 10 15:18:08 crc kubenswrapper[4727]: E1210 15:18:08.565923 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:18:10 crc kubenswrapper[4727]: I1210 15:18:10.633000 4727 generic.go:334] "Generic (PLEG): container finished" podID="afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" containerID="6f40874188b5636676b4b7e12a9c9b4e82cb15e610c81511493d17ae9326e0ee" exitCode=0 Dec 10 15:18:10 crc kubenswrapper[4727]: I1210 15:18:10.633089 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrghq" event={"ID":"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74","Type":"ContainerDied","Data":"6f40874188b5636676b4b7e12a9c9b4e82cb15e610c81511493d17ae9326e0ee"} Dec 10 15:18:11 crc kubenswrapper[4727]: I1210 15:18:11.646098 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrghq" event={"ID":"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74","Type":"ContainerStarted","Data":"d78db0a9e566e011e6249c53699e8a1d26fe50cce54a118ce983764d80b82bc7"} Dec 
Dec 10 15:18:11 crc kubenswrapper[4727]: I1210 15:18:11.664767 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jrghq" podStartSLOduration=2.191071291 podStartE2EDuration="7.664749326s" podCreationTimestamp="2025-12-10 15:18:04 +0000 UTC" firstStartedPulling="2025-12-10 15:18:05.578586278 +0000 UTC m=+2789.773360820" lastFinishedPulling="2025-12-10 15:18:11.052264323 +0000 UTC m=+2795.247038855" observedRunningTime="2025-12-10 15:18:11.66250512 +0000 UTC m=+2795.857279662" watchObservedRunningTime="2025-12-10 15:18:11.664749326 +0000 UTC m=+2795.859523868"
Dec 10 15:18:14 crc kubenswrapper[4727]: I1210 15:18:14.589027 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jrghq"
Dec 10 15:18:14 crc kubenswrapper[4727]: I1210 15:18:14.589771 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jrghq"
Dec 10 15:18:15 crc kubenswrapper[4727]: I1210 15:18:15.638776 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jrghq" podUID="afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" containerName="registry-server" probeResult="failure" output=<
Dec 10 15:18:15 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s
Dec 10 15:18:15 crc kubenswrapper[4727]: >
Dec 10 15:18:19 crc kubenswrapper[4727]: E1210 15:18:19.566899 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:18:22 crc kubenswrapper[4727]: E1210 15:18:22.566056 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:18:24 crc kubenswrapper[4727]: I1210 15:18:24.640861 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jrghq"
Dec 10 15:18:24 crc kubenswrapper[4727]: I1210 15:18:24.692378 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jrghq"
Dec 10 15:18:24 crc kubenswrapper[4727]: I1210 15:18:24.902109 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jrghq"]
Dec 10 15:18:25 crc kubenswrapper[4727]: I1210 15:18:25.776892 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jrghq" podUID="afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" containerName="registry-server" containerID="cri-o://d78db0a9e566e011e6249c53699e8a1d26fe50cce54a118ce983764d80b82bc7" gracePeriod=2
Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.699720 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrghq"
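
The pod_startup_latency_tracker entry above is internally consistent: the pull window (firstStartedPulling to lastFinishedPulling) is about 5.47s, and subtracting it from podStartE2EDuration (7.664749326s) gives roughly the reported podStartSLOduration of 2.19s, which is what you would expect if the SLO figure excludes image-pull time. A quick check, with the timestamps copied from the entry (nanosecond digits truncated to microseconds, hence a tiny rounding difference):

from datetime import datetime

FIRST_PULL = "2025-12-10 15:18:05.578586278"
LAST_PULL = "2025-12-10 15:18:11.052264323"
E2E_SECONDS = 7.664749326

def parse(ts: str) -> datetime:
    # strptime only understands microseconds, so drop the final 3 digits.
    head, frac = ts.rsplit(".", 1)
    return datetime.strptime(f"{head}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f")

pull = (parse(LAST_PULL) - parse(FIRST_PULL)).total_seconds()
print(f"pull window: {pull:.6f}s")                   # ~5.473678s
print(f"E2E minus pull: {E2E_SECONDS - pull:.6f}s")  # ~2.191071s, the SLO figure
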
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrghq" Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.791358 4727 generic.go:334] "Generic (PLEG): container finished" podID="afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" containerID="d78db0a9e566e011e6249c53699e8a1d26fe50cce54a118ce983764d80b82bc7" exitCode=0 Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.791404 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrghq" event={"ID":"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74","Type":"ContainerDied","Data":"d78db0a9e566e011e6249c53699e8a1d26fe50cce54a118ce983764d80b82bc7"} Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.791435 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrghq" event={"ID":"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74","Type":"ContainerDied","Data":"535ffbf7773c0a36396230c29fca6fbbed3af82e7eca0937b8686f9ec8faee0c"} Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.791454 4727 scope.go:117] "RemoveContainer" containerID="d78db0a9e566e011e6249c53699e8a1d26fe50cce54a118ce983764d80b82bc7" Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.791466 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrghq" Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.816677 4727 scope.go:117] "RemoveContainer" containerID="6f40874188b5636676b4b7e12a9c9b4e82cb15e610c81511493d17ae9326e0ee" Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.839125 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-utilities\") pod \"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74\" (UID: \"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74\") " Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.839457 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v979m\" (UniqueName: \"kubernetes.io/projected/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-kube-api-access-v979m\") pod \"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74\" (UID: \"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74\") " Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.839561 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-catalog-content\") pod \"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74\" (UID: \"afeff7b6-bcd1-4b15-b6e9-3b138be3cb74\") " Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.840010 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-utilities" (OuterVolumeSpecName: "utilities") pod "afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" (UID: "afeff7b6-bcd1-4b15-b6e9-3b138be3cb74"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.840137 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.843319 4727 scope.go:117] "RemoveContainer" containerID="7f68da99fc63f51ccf601d5527dfb62166ac37f4941c5a952f76539866f0e60c" Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.846583 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-kube-api-access-v979m" (OuterVolumeSpecName: "kube-api-access-v979m") pod "afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" (UID: "afeff7b6-bcd1-4b15-b6e9-3b138be3cb74"). InnerVolumeSpecName "kube-api-access-v979m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.941979 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v979m\" (UniqueName: \"kubernetes.io/projected/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-kube-api-access-v979m\") on node \"crc\" DevicePath \"\"" Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.945348 4727 scope.go:117] "RemoveContainer" containerID="d78db0a9e566e011e6249c53699e8a1d26fe50cce54a118ce983764d80b82bc7" Dec 10 15:18:26 crc kubenswrapper[4727]: E1210 15:18:26.945864 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d78db0a9e566e011e6249c53699e8a1d26fe50cce54a118ce983764d80b82bc7\": container with ID starting with d78db0a9e566e011e6249c53699e8a1d26fe50cce54a118ce983764d80b82bc7 not found: ID does not exist" containerID="d78db0a9e566e011e6249c53699e8a1d26fe50cce54a118ce983764d80b82bc7" Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.945933 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d78db0a9e566e011e6249c53699e8a1d26fe50cce54a118ce983764d80b82bc7"} err="failed to get container status \"d78db0a9e566e011e6249c53699e8a1d26fe50cce54a118ce983764d80b82bc7\": rpc error: code = NotFound desc = could not find container \"d78db0a9e566e011e6249c53699e8a1d26fe50cce54a118ce983764d80b82bc7\": container with ID starting with d78db0a9e566e011e6249c53699e8a1d26fe50cce54a118ce983764d80b82bc7 not found: ID does not exist" Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.945962 4727 scope.go:117] "RemoveContainer" containerID="6f40874188b5636676b4b7e12a9c9b4e82cb15e610c81511493d17ae9326e0ee" Dec 10 15:18:26 crc kubenswrapper[4727]: E1210 15:18:26.948867 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f40874188b5636676b4b7e12a9c9b4e82cb15e610c81511493d17ae9326e0ee\": container with ID starting with 6f40874188b5636676b4b7e12a9c9b4e82cb15e610c81511493d17ae9326e0ee not found: ID does not exist" containerID="6f40874188b5636676b4b7e12a9c9b4e82cb15e610c81511493d17ae9326e0ee" Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.948917 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f40874188b5636676b4b7e12a9c9b4e82cb15e610c81511493d17ae9326e0ee"} err="failed to get container status \"6f40874188b5636676b4b7e12a9c9b4e82cb15e610c81511493d17ae9326e0ee\": rpc error: code = NotFound desc = could not find container 
\"6f40874188b5636676b4b7e12a9c9b4e82cb15e610c81511493d17ae9326e0ee\": container with ID starting with 6f40874188b5636676b4b7e12a9c9b4e82cb15e610c81511493d17ae9326e0ee not found: ID does not exist" Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.948943 4727 scope.go:117] "RemoveContainer" containerID="7f68da99fc63f51ccf601d5527dfb62166ac37f4941c5a952f76539866f0e60c" Dec 10 15:18:26 crc kubenswrapper[4727]: E1210 15:18:26.949131 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f68da99fc63f51ccf601d5527dfb62166ac37f4941c5a952f76539866f0e60c\": container with ID starting with 7f68da99fc63f51ccf601d5527dfb62166ac37f4941c5a952f76539866f0e60c not found: ID does not exist" containerID="7f68da99fc63f51ccf601d5527dfb62166ac37f4941c5a952f76539866f0e60c" Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.949150 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f68da99fc63f51ccf601d5527dfb62166ac37f4941c5a952f76539866f0e60c"} err="failed to get container status \"7f68da99fc63f51ccf601d5527dfb62166ac37f4941c5a952f76539866f0e60c\": rpc error: code = NotFound desc = could not find container \"7f68da99fc63f51ccf601d5527dfb62166ac37f4941c5a952f76539866f0e60c\": container with ID starting with 7f68da99fc63f51ccf601d5527dfb62166ac37f4941c5a952f76539866f0e60c not found: ID does not exist" Dec 10 15:18:26 crc kubenswrapper[4727]: I1210 15:18:26.968500 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" (UID: "afeff7b6-bcd1-4b15-b6e9-3b138be3cb74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:18:27 crc kubenswrapper[4727]: I1210 15:18:27.043976 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:18:27 crc kubenswrapper[4727]: I1210 15:18:27.145414 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jrghq"] Dec 10 15:18:27 crc kubenswrapper[4727]: I1210 15:18:27.159731 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jrghq"] Dec 10 15:18:28 crc kubenswrapper[4727]: I1210 15:18:28.575481 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" path="/var/lib/kubelet/pods/afeff7b6-bcd1-4b15-b6e9-3b138be3cb74/volumes" Dec 10 15:18:30 crc kubenswrapper[4727]: E1210 15:18:30.565222 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:18:35 crc kubenswrapper[4727]: E1210 15:18:35.687894 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:18:35 crc kubenswrapper[4727]: E1210 15:18:35.688481 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:18:35 crc kubenswrapper[4727]: E1210 15:18:35.688617 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:18:35 crc kubenswrapper[4727]: E1210 15:18:35.689794 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
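
The "Unhandled Error" entry above serializes the whole container spec into one line, which is unreadable but machine-minable: the image reference, ImagePullPolicy:Always (which forces a registry hit on every start and keeps this failure hot), and the liveness exec probe are all there as Field:value pairs. A sketch that slices the blob out and extracts a few fields; only field names literally present in the entry are searched, and kubelet.log is again an assumed export:

import re

blob = open("kubelet.log", encoding="utf-8").read()  # assumed export
start = blob.find("container &Container{")
end = blob.find("} start failed in pod", start)
spec = blob[start:end] if start != -1 and end != -1 else ""

for field in ("Name", "Image", "ImagePullPolicy", "TerminationMessagePath"):
    m = re.search(rf"\b{field}:([^,]+),", spec)
    print(field, "=", m.group(1) if m else "<not found>")
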
Dec 10 15:18:44 crc kubenswrapper[4727]: E1210 15:18:44.566226 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:18:49 crc kubenswrapper[4727]: E1210 15:18:49.565734 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:18:59 crc kubenswrapper[4727]: E1210 15:18:59.565663 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:19:03 crc kubenswrapper[4727]: E1210 15:19:03.566448 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:19:13 crc kubenswrapper[4727]: E1210 15:19:13.565662 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:19:15 crc kubenswrapper[4727]: E1210 15:19:15.564552 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:19:26 crc kubenswrapper[4727]: E1210 15:19:26.572431 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:19:28 crc kubenswrapper[4727]: E1210 15:19:28.565922 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:19:39 crc kubenswrapper[4727]: E1210 15:19:39.565602 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:19:43 crc kubenswrapper[4727]: E1210 15:19:43.566031 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:19:52 crc kubenswrapper[4727]: E1210 15:19:52.567090 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:19:56 crc kubenswrapper[4727]: E1210 15:19:56.572872 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:20:03 crc kubenswrapper[4727]: E1210 15:20:03.565426 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.132202 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4x794"]
Dec 10 15:20:07 crc kubenswrapper[4727]: E1210 15:20:07.133214 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" containerName="extract-utilities"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.133231 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" containerName="extract-utilities"
Dec 10 15:20:07 crc kubenswrapper[4727]: E1210 15:20:07.133272 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" containerName="extract-content"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.133282 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" containerName="extract-content"
Dec 10 15:20:07 crc kubenswrapper[4727]: E1210 15:20:07.133298 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" containerName="registry-server"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.133307 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" containerName="registry-server"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.133550 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeff7b6-bcd1-4b15-b6e9-3b138be3cb74" containerName="registry-server"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.135766 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.149151 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4x794"]
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.237935 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-catalog-content\") pod \"redhat-marketplace-4x794\" (UID: \"372cfffe-14e8-4e6b-88c1-2bd233cffa1f\") " pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.238286 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-utilities\") pod \"redhat-marketplace-4x794\" (UID: \"372cfffe-14e8-4e6b-88c1-2bd233cffa1f\") " pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.238354 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xcg5\" (UniqueName: \"kubernetes.io/projected/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-kube-api-access-6xcg5\") pod \"redhat-marketplace-4x794\" (UID: \"372cfffe-14e8-4e6b-88c1-2bd233cffa1f\") " pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.340443 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-catalog-content\") pod \"redhat-marketplace-4x794\" (UID: \"372cfffe-14e8-4e6b-88c1-2bd233cffa1f\") " pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.341092 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-catalog-content\") pod \"redhat-marketplace-4x794\" (UID: \"372cfffe-14e8-4e6b-88c1-2bd233cffa1f\") " pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.341334 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-utilities\") pod \"redhat-marketplace-4x794\" (UID: \"372cfffe-14e8-4e6b-88c1-2bd233cffa1f\") " pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.341512 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xcg5\" (UniqueName: \"kubernetes.io/projected/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-kube-api-access-6xcg5\") pod \"redhat-marketplace-4x794\" (UID: \"372cfffe-14e8-4e6b-88c1-2bd233cffa1f\") " pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.341667 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-utilities\") pod \"redhat-marketplace-4x794\" (UID: \"372cfffe-14e8-4e6b-88c1-2bd233cffa1f\") " pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.373241 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xcg5\" (UniqueName: \"kubernetes.io/projected/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-kube-api-access-6xcg5\") pod \"redhat-marketplace-4x794\" (UID: \"372cfffe-14e8-4e6b-88c1-2bd233cffa1f\") " pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.467234 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.723590 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.724004 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 15:20:07 crc kubenswrapper[4727]: W1210 15:20:07.972070 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod372cfffe_14e8_4e6b_88c1_2bd233cffa1f.slice/crio-aa892606851135ab57c98436bf64420c0a22b3e39c837732000c5cd75a8e56b4 WatchSource:0}: Error finding container aa892606851135ab57c98436bf64420c0a22b3e39c837732000c5cd75a8e56b4: Status 404 returned error can't find the container with id aa892606851135ab57c98436bf64420c0a22b3e39c837732000c5cd75a8e56b4
Dec 10 15:20:07 crc kubenswrapper[4727]: I1210 15:20:07.973829 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4x794"]
Dec 10 15:20:08 crc kubenswrapper[4727]: E1210 15:20:08.565655 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:20:08 crc kubenswrapper[4727]: I1210 15:20:08.862031 4727 generic.go:334] "Generic (PLEG): container finished" podID="372cfffe-14e8-4e6b-88c1-2bd233cffa1f" containerID="3b47a372334439c448d3b66a8a58a51481a0b23aa1b8c969e688f7d4ddb4eae1" exitCode=0
Dec 10 15:20:08 crc kubenswrapper[4727]: I1210 15:20:08.862116 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4x794" event={"ID":"372cfffe-14e8-4e6b-88c1-2bd233cffa1f","Type":"ContainerDied","Data":"3b47a372334439c448d3b66a8a58a51481a0b23aa1b8c969e688f7d4ddb4eae1"}
Dec 10 15:20:08 crc kubenswrapper[4727]: I1210 15:20:08.862422 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4x794" event={"ID":"372cfffe-14e8-4e6b-88c1-2bd233cffa1f","Type":"ContainerStarted","Data":"aa892606851135ab57c98436bf64420c0a22b3e39c837732000c5cd75a8e56b4"}
Dec 10 15:20:10 crc kubenswrapper[4727]: I1210 15:20:10.968250 4727 generic.go:334] "Generic (PLEG): container finished" podID="372cfffe-14e8-4e6b-88c1-2bd233cffa1f" containerID="f0bd0443f909cc1d807b5cefd9d85ad3336c40a433efb23395c7bc71ed1d3c82" exitCode=0
Dec 10 15:20:10 crc kubenswrapper[4727]: I1210 15:20:10.968345 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4x794" event={"ID":"372cfffe-14e8-4e6b-88c1-2bd233cffa1f","Type":"ContainerDied","Data":"f0bd0443f909cc1d807b5cefd9d85ad3336c40a433efb23395c7bc71ed1d3c82"}
Dec 10 15:20:11 crc kubenswrapper[4727]: I1210 15:20:11.988342 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4x794" event={"ID":"372cfffe-14e8-4e6b-88c1-2bd233cffa1f","Type":"ContainerStarted","Data":"eb10300fc441c4dbe79ed544565aa922336c6138b26586b0cc252a98906c7c4c"}
Dec 10 15:20:12 crc kubenswrapper[4727]: I1210 15:20:12.028888 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4x794" podStartSLOduration=2.44939581 podStartE2EDuration="5.028868853s" podCreationTimestamp="2025-12-10 15:20:07 +0000 UTC" firstStartedPulling="2025-12-10 15:20:08.86421207 +0000 UTC m=+2913.058986612" lastFinishedPulling="2025-12-10 15:20:11.443685103 +0000 UTC m=+2915.638459655" observedRunningTime="2025-12-10 15:20:12.026825312 +0000 UTC m=+2916.221599854" watchObservedRunningTime="2025-12-10 15:20:12.028868853 +0000 UTC m=+2916.223643395"
Dec 10 15:20:15 crc kubenswrapper[4727]: E1210 15:20:15.565567 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:20:17 crc kubenswrapper[4727]: I1210 15:20:17.467888 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:17 crc kubenswrapper[4727]: I1210 15:20:17.468330 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:17 crc kubenswrapper[4727]: I1210 15:20:17.529734 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:18 crc kubenswrapper[4727]: I1210 15:20:18.135697 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:18 crc kubenswrapper[4727]: I1210 15:20:18.183364 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4x794"]
Dec 10 15:20:20 crc kubenswrapper[4727]: I1210 15:20:20.104192 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4x794" podUID="372cfffe-14e8-4e6b-88c1-2bd233cffa1f" containerName="registry-server" containerID="cri-o://eb10300fc441c4dbe79ed544565aa922336c6138b26586b0cc252a98906c7c4c" gracePeriod=2
Dec 10 15:20:20 crc kubenswrapper[4727]: I1210 15:20:20.600602 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4x794"
Dec 10 15:20:20 crc kubenswrapper[4727]: I1210 15:20:20.701062 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-catalog-content\") pod \"372cfffe-14e8-4e6b-88c1-2bd233cffa1f\" (UID: \"372cfffe-14e8-4e6b-88c1-2bd233cffa1f\") "
Dec 10 15:20:20 crc kubenswrapper[4727]: I1210 15:20:20.701249 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xcg5\" (UniqueName: \"kubernetes.io/projected/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-kube-api-access-6xcg5\") pod \"372cfffe-14e8-4e6b-88c1-2bd233cffa1f\" (UID: \"372cfffe-14e8-4e6b-88c1-2bd233cffa1f\") "
Dec 10 15:20:20 crc kubenswrapper[4727]: I1210 15:20:20.701397 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-utilities\") pod \"372cfffe-14e8-4e6b-88c1-2bd233cffa1f\" (UID: \"372cfffe-14e8-4e6b-88c1-2bd233cffa1f\") "
Dec 10 15:20:20 crc kubenswrapper[4727]: I1210 15:20:20.702339 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-utilities" (OuterVolumeSpecName: "utilities") pod "372cfffe-14e8-4e6b-88c1-2bd233cffa1f" (UID: "372cfffe-14e8-4e6b-88c1-2bd233cffa1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:20:20 crc kubenswrapper[4727]: I1210 15:20:20.707716 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-kube-api-access-6xcg5" (OuterVolumeSpecName: "kube-api-access-6xcg5") pod "372cfffe-14e8-4e6b-88c1-2bd233cffa1f" (UID: "372cfffe-14e8-4e6b-88c1-2bd233cffa1f"). InnerVolumeSpecName "kube-api-access-6xcg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:20:20 crc kubenswrapper[4727]: I1210 15:20:20.727662 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "372cfffe-14e8-4e6b-88c1-2bd233cffa1f" (UID: "372cfffe-14e8-4e6b-88c1-2bd233cffa1f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:20:20 crc kubenswrapper[4727]: I1210 15:20:20.804797 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:20:20 crc kubenswrapper[4727]: I1210 15:20:20.804863 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xcg5\" (UniqueName: \"kubernetes.io/projected/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-kube-api-access-6xcg5\") on node \"crc\" DevicePath \"\"" Dec 10 15:20:20 crc kubenswrapper[4727]: I1210 15:20:20.804887 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372cfffe-14e8-4e6b-88c1-2bd233cffa1f-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:20:21 crc kubenswrapper[4727]: I1210 15:20:21.122237 4727 generic.go:334] "Generic (PLEG): container finished" podID="372cfffe-14e8-4e6b-88c1-2bd233cffa1f" containerID="eb10300fc441c4dbe79ed544565aa922336c6138b26586b0cc252a98906c7c4c" exitCode=0 Dec 10 15:20:21 crc kubenswrapper[4727]: I1210 15:20:21.122344 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4x794" Dec 10 15:20:21 crc kubenswrapper[4727]: I1210 15:20:21.122334 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4x794" event={"ID":"372cfffe-14e8-4e6b-88c1-2bd233cffa1f","Type":"ContainerDied","Data":"eb10300fc441c4dbe79ed544565aa922336c6138b26586b0cc252a98906c7c4c"} Dec 10 15:20:21 crc kubenswrapper[4727]: I1210 15:20:21.122433 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4x794" event={"ID":"372cfffe-14e8-4e6b-88c1-2bd233cffa1f","Type":"ContainerDied","Data":"aa892606851135ab57c98436bf64420c0a22b3e39c837732000c5cd75a8e56b4"} Dec 10 15:20:21 crc kubenswrapper[4727]: I1210 15:20:21.122458 4727 scope.go:117] "RemoveContainer" containerID="eb10300fc441c4dbe79ed544565aa922336c6138b26586b0cc252a98906c7c4c" Dec 10 15:20:21 crc kubenswrapper[4727]: I1210 15:20:21.164783 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4x794"] Dec 10 15:20:21 crc kubenswrapper[4727]: I1210 15:20:21.170060 4727 scope.go:117] "RemoveContainer" containerID="f0bd0443f909cc1d807b5cefd9d85ad3336c40a433efb23395c7bc71ed1d3c82" Dec 10 15:20:21 crc kubenswrapper[4727]: I1210 15:20:21.178425 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4x794"] Dec 10 15:20:21 crc kubenswrapper[4727]: I1210 15:20:21.195575 4727 scope.go:117] "RemoveContainer" containerID="3b47a372334439c448d3b66a8a58a51481a0b23aa1b8c969e688f7d4ddb4eae1" Dec 10 15:20:21 crc kubenswrapper[4727]: I1210 15:20:21.257314 4727 scope.go:117] "RemoveContainer" containerID="eb10300fc441c4dbe79ed544565aa922336c6138b26586b0cc252a98906c7c4c" Dec 10 15:20:21 crc kubenswrapper[4727]: E1210 15:20:21.258411 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb10300fc441c4dbe79ed544565aa922336c6138b26586b0cc252a98906c7c4c\": container with ID starting with eb10300fc441c4dbe79ed544565aa922336c6138b26586b0cc252a98906c7c4c not found: ID does not exist" containerID="eb10300fc441c4dbe79ed544565aa922336c6138b26586b0cc252a98906c7c4c" Dec 10 15:20:21 crc kubenswrapper[4727]: I1210 15:20:21.258477 4727 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb10300fc441c4dbe79ed544565aa922336c6138b26586b0cc252a98906c7c4c"} err="failed to get container status \"eb10300fc441c4dbe79ed544565aa922336c6138b26586b0cc252a98906c7c4c\": rpc error: code = NotFound desc = could not find container \"eb10300fc441c4dbe79ed544565aa922336c6138b26586b0cc252a98906c7c4c\": container with ID starting with eb10300fc441c4dbe79ed544565aa922336c6138b26586b0cc252a98906c7c4c not found: ID does not exist" Dec 10 15:20:21 crc kubenswrapper[4727]: I1210 15:20:21.258511 4727 scope.go:117] "RemoveContainer" containerID="f0bd0443f909cc1d807b5cefd9d85ad3336c40a433efb23395c7bc71ed1d3c82" Dec 10 15:20:21 crc kubenswrapper[4727]: E1210 15:20:21.258958 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0bd0443f909cc1d807b5cefd9d85ad3336c40a433efb23395c7bc71ed1d3c82\": container with ID starting with f0bd0443f909cc1d807b5cefd9d85ad3336c40a433efb23395c7bc71ed1d3c82 not found: ID does not exist" containerID="f0bd0443f909cc1d807b5cefd9d85ad3336c40a433efb23395c7bc71ed1d3c82" Dec 10 15:20:21 crc kubenswrapper[4727]: I1210 15:20:21.259075 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0bd0443f909cc1d807b5cefd9d85ad3336c40a433efb23395c7bc71ed1d3c82"} err="failed to get container status \"f0bd0443f909cc1d807b5cefd9d85ad3336c40a433efb23395c7bc71ed1d3c82\": rpc error: code = NotFound desc = could not find container \"f0bd0443f909cc1d807b5cefd9d85ad3336c40a433efb23395c7bc71ed1d3c82\": container with ID starting with f0bd0443f909cc1d807b5cefd9d85ad3336c40a433efb23395c7bc71ed1d3c82 not found: ID does not exist" Dec 10 15:20:21 crc kubenswrapper[4727]: I1210 15:20:21.259165 4727 scope.go:117] "RemoveContainer" containerID="3b47a372334439c448d3b66a8a58a51481a0b23aa1b8c969e688f7d4ddb4eae1" Dec 10 15:20:21 crc kubenswrapper[4727]: E1210 15:20:21.260059 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b47a372334439c448d3b66a8a58a51481a0b23aa1b8c969e688f7d4ddb4eae1\": container with ID starting with 3b47a372334439c448d3b66a8a58a51481a0b23aa1b8c969e688f7d4ddb4eae1 not found: ID does not exist" containerID="3b47a372334439c448d3b66a8a58a51481a0b23aa1b8c969e688f7d4ddb4eae1" Dec 10 15:20:21 crc kubenswrapper[4727]: I1210 15:20:21.260102 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b47a372334439c448d3b66a8a58a51481a0b23aa1b8c969e688f7d4ddb4eae1"} err="failed to get container status \"3b47a372334439c448d3b66a8a58a51481a0b23aa1b8c969e688f7d4ddb4eae1\": rpc error: code = NotFound desc = could not find container \"3b47a372334439c448d3b66a8a58a51481a0b23aa1b8c969e688f7d4ddb4eae1\": container with ID starting with 3b47a372334439c448d3b66a8a58a51481a0b23aa1b8c969e688f7d4ddb4eae1 not found: ID does not exist" Dec 10 15:20:22 crc kubenswrapper[4727]: E1210 15:20:22.566169 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:20:22 crc kubenswrapper[4727]: I1210 15:20:22.578040 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
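
Deletion above runs to completion in the expected order: API DELETE, container kill and CRI cleanup, volume unmount and detach, API REMOVE, and finally "Cleaned up orphaned pod volumes dir". Note the REMOVE entry keys on the pod name while the cleanup entry keys on the pod UID, so correlating them needs both fields; a sketch (assumed kubelet.log export):

import re

text = open("kubelet.log", encoding="utf-8").read()  # assumed export
removed = re.findall(r'"SyncLoop REMOVE" source="api" pods=\["([^"]+)"\]', text)
cleaned = re.findall(
    r'"Cleaned up orphaned pod volumes dir" podUID="([^"]+)"', text
)
print("REMOVEd pods (by name):", removed)
print("cleaned volume dirs (by UID):", cleaned)
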
podUID="372cfffe-14e8-4e6b-88c1-2bd233cffa1f" path="/var/lib/kubelet/pods/372cfffe-14e8-4e6b-88c1-2bd233cffa1f/volumes" Dec 10 15:20:30 crc kubenswrapper[4727]: E1210 15:20:30.565503 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:20:36 crc kubenswrapper[4727]: E1210 15:20:36.568081 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:20:37 crc kubenswrapper[4727]: I1210 15:20:37.724070 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:20:37 crc kubenswrapper[4727]: I1210 15:20:37.725793 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:20:43 crc kubenswrapper[4727]: E1210 15:20:43.566546 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:20:47 crc kubenswrapper[4727]: E1210 15:20:47.567010 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:20:54 crc kubenswrapper[4727]: E1210 15:20:54.565136 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:20:59 crc kubenswrapper[4727]: I1210 15:20:59.506546 4727 generic.go:334] "Generic (PLEG): container finished" podID="08f2ae25-5b39-416e-9f25-830650cc91d0" containerID="eff88eaed795c8b287d8a8e53cfd0bb594c960a1263d4344199892e0b8745046" exitCode=2 Dec 10 15:20:59 crc kubenswrapper[4727]: I1210 15:20:59.506630 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" event={"ID":"08f2ae25-5b39-416e-9f25-830650cc91d0","Type":"ContainerDied","Data":"eff88eaed795c8b287d8a8e53cfd0bb594c960a1263d4344199892e0b8745046"} Dec 10 15:21:00 crc kubenswrapper[4727]: 
Dec 10 15:21:00 crc kubenswrapper[4727]: E1210 15:21:00.564937 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:21:01 crc kubenswrapper[4727]: I1210 15:21:01.041821 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr"
Dec 10 15:21:01 crc kubenswrapper[4727]: I1210 15:21:01.100419 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08f2ae25-5b39-416e-9f25-830650cc91d0-ssh-key\") pod \"08f2ae25-5b39-416e-9f25-830650cc91d0\" (UID: \"08f2ae25-5b39-416e-9f25-830650cc91d0\") "
Dec 10 15:21:01 crc kubenswrapper[4727]: I1210 15:21:01.100701 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08f2ae25-5b39-416e-9f25-830650cc91d0-inventory\") pod \"08f2ae25-5b39-416e-9f25-830650cc91d0\" (UID: \"08f2ae25-5b39-416e-9f25-830650cc91d0\") "
Dec 10 15:21:01 crc kubenswrapper[4727]: I1210 15:21:01.100757 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbkkh\" (UniqueName: \"kubernetes.io/projected/08f2ae25-5b39-416e-9f25-830650cc91d0-kube-api-access-xbkkh\") pod \"08f2ae25-5b39-416e-9f25-830650cc91d0\" (UID: \"08f2ae25-5b39-416e-9f25-830650cc91d0\") "
Dec 10 15:21:01 crc kubenswrapper[4727]: I1210 15:21:01.116652 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08f2ae25-5b39-416e-9f25-830650cc91d0-kube-api-access-xbkkh" (OuterVolumeSpecName: "kube-api-access-xbkkh") pod "08f2ae25-5b39-416e-9f25-830650cc91d0" (UID: "08f2ae25-5b39-416e-9f25-830650cc91d0"). InnerVolumeSpecName "kube-api-access-xbkkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:21:01 crc kubenswrapper[4727]: I1210 15:21:01.131802 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08f2ae25-5b39-416e-9f25-830650cc91d0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "08f2ae25-5b39-416e-9f25-830650cc91d0" (UID: "08f2ae25-5b39-416e-9f25-830650cc91d0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:21:01 crc kubenswrapper[4727]: I1210 15:21:01.132294 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08f2ae25-5b39-416e-9f25-830650cc91d0-inventory" (OuterVolumeSpecName: "inventory") pod "08f2ae25-5b39-416e-9f25-830650cc91d0" (UID: "08f2ae25-5b39-416e-9f25-830650cc91d0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:21:01 crc kubenswrapper[4727]: I1210 15:21:01.203733 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08f2ae25-5b39-416e-9f25-830650cc91d0-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 10 15:21:01 crc kubenswrapper[4727]: I1210 15:21:01.203766 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08f2ae25-5b39-416e-9f25-830650cc91d0-inventory\") on node \"crc\" DevicePath \"\""
Dec 10 15:21:01 crc kubenswrapper[4727]: I1210 15:21:01.203776 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbkkh\" (UniqueName: \"kubernetes.io/projected/08f2ae25-5b39-416e-9f25-830650cc91d0-kube-api-access-xbkkh\") on node \"crc\" DevicePath \"\""
Dec 10 15:21:01 crc kubenswrapper[4727]: I1210 15:21:01.527158 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr" event={"ID":"08f2ae25-5b39-416e-9f25-830650cc91d0","Type":"ContainerDied","Data":"e052c6b727e48186429ab6fb2ec8d4d1eeebccdf7b878cfd51a6a37b2577a43f"}
Dec 10 15:21:01 crc kubenswrapper[4727]: I1210 15:21:01.527203 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e052c6b727e48186429ab6fb2ec8d4d1eeebccdf7b878cfd51a6a37b2577a43f"
Dec 10 15:21:01 crc kubenswrapper[4727]: I1210 15:21:01.527220 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b76mr"
Dec 10 15:21:05 crc kubenswrapper[4727]: E1210 15:21:05.565375 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:21:07 crc kubenswrapper[4727]: I1210 15:21:07.723762 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 15:21:07 crc kubenswrapper[4727]: I1210 15:21:07.724122 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 15:21:07 crc kubenswrapper[4727]: I1210 15:21:07.724180 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v"
Dec 10 15:21:07 crc kubenswrapper[4727]: I1210 15:21:07.725093 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6195be96b514419e18bf6ce7d1ba910b002fff9465fc1e1d0763a30d95f493fe"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 10 15:21:07 crc kubenswrapper[4727]: I1210 15:21:07.725174 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://6195be96b514419e18bf6ce7d1ba910b002fff9465fc1e1d0763a30d95f493fe" gracePeriod=600
pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://6195be96b514419e18bf6ce7d1ba910b002fff9465fc1e1d0763a30d95f493fe" gracePeriod=600 Dec 10 15:21:08 crc kubenswrapper[4727]: I1210 15:21:08.630176 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="6195be96b514419e18bf6ce7d1ba910b002fff9465fc1e1d0763a30d95f493fe" exitCode=0 Dec 10 15:21:08 crc kubenswrapper[4727]: I1210 15:21:08.630731 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"6195be96b514419e18bf6ce7d1ba910b002fff9465fc1e1d0763a30d95f493fe"} Dec 10 15:21:08 crc kubenswrapper[4727]: I1210 15:21:08.630764 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3"} Dec 10 15:21:08 crc kubenswrapper[4727]: I1210 15:21:08.630782 4727 scope.go:117] "RemoveContainer" containerID="5ce1e45226ef307657c4b6eb9de59952916ec08ba2b2bd28096b50bbf8762750" Dec 10 15:21:11 crc kubenswrapper[4727]: E1210 15:21:11.567664 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.029272 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5"] Dec 10 15:21:19 crc kubenswrapper[4727]: E1210 15:21:19.030245 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372cfffe-14e8-4e6b-88c1-2bd233cffa1f" containerName="extract-content" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.030260 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="372cfffe-14e8-4e6b-88c1-2bd233cffa1f" containerName="extract-content" Dec 10 15:21:19 crc kubenswrapper[4727]: E1210 15:21:19.030289 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372cfffe-14e8-4e6b-88c1-2bd233cffa1f" containerName="extract-utilities" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.030324 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="372cfffe-14e8-4e6b-88c1-2bd233cffa1f" containerName="extract-utilities" Dec 10 15:21:19 crc kubenswrapper[4727]: E1210 15:21:19.030344 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372cfffe-14e8-4e6b-88c1-2bd233cffa1f" containerName="registry-server" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.030352 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="372cfffe-14e8-4e6b-88c1-2bd233cffa1f" containerName="registry-server" Dec 10 15:21:19 crc kubenswrapper[4727]: E1210 15:21:19.030360 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f2ae25-5b39-416e-9f25-830650cc91d0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.030367 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="08f2ae25-5b39-416e-9f25-830650cc91d0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.030579 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="372cfffe-14e8-4e6b-88c1-2bd233cffa1f" containerName="registry-server" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.030606 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="08f2ae25-5b39-416e-9f25-830650cc91d0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.031477 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.033718 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.033982 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j82js" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.034065 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.035485 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.047972 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5"] Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.114842 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/571070d9-6ff3-4477-b6dd-567afc8be7e1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5\" (UID: \"571070d9-6ff3-4477-b6dd-567afc8be7e1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.114900 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4j62\" (UniqueName: \"kubernetes.io/projected/571070d9-6ff3-4477-b6dd-567afc8be7e1-kube-api-access-j4j62\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5\" (UID: \"571070d9-6ff3-4477-b6dd-567afc8be7e1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.115096 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/571070d9-6ff3-4477-b6dd-567afc8be7e1-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5\" (UID: \"571070d9-6ff3-4477-b6dd-567afc8be7e1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.217404 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/571070d9-6ff3-4477-b6dd-567afc8be7e1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5\" (UID: \"571070d9-6ff3-4477-b6dd-567afc8be7e1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.217467 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4j62\" (UniqueName: \"kubernetes.io/projected/571070d9-6ff3-4477-b6dd-567afc8be7e1-kube-api-access-j4j62\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5\" (UID: \"571070d9-6ff3-4477-b6dd-567afc8be7e1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.217542 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/571070d9-6ff3-4477-b6dd-567afc8be7e1-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5\" (UID: \"571070d9-6ff3-4477-b6dd-567afc8be7e1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.224121 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/571070d9-6ff3-4477-b6dd-567afc8be7e1-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5\" (UID: \"571070d9-6ff3-4477-b6dd-567afc8be7e1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.227469 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/571070d9-6ff3-4477-b6dd-567afc8be7e1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5\" (UID: \"571070d9-6ff3-4477-b6dd-567afc8be7e1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.236748 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4j62\" (UniqueName: \"kubernetes.io/projected/571070d9-6ff3-4477-b6dd-567afc8be7e1-kube-api-access-j4j62\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5\" (UID: \"571070d9-6ff3-4477-b6dd-567afc8be7e1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.366971 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" Dec 10 15:21:19 crc kubenswrapper[4727]: I1210 15:21:19.939029 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5"] Dec 10 15:21:20 crc kubenswrapper[4727]: E1210 15:21:20.564834 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:21:20 crc kubenswrapper[4727]: I1210 15:21:20.751595 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" event={"ID":"571070d9-6ff3-4477-b6dd-567afc8be7e1","Type":"ContainerStarted","Data":"685bcdd03404869528cdaf4e42d367c3bf092bbaf5993d9e39b44169ed1e10ec"} Dec 10 15:21:21 crc kubenswrapper[4727]: I1210 15:21:21.761586 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" event={"ID":"571070d9-6ff3-4477-b6dd-567afc8be7e1","Type":"ContainerStarted","Data":"49ed43472d037a994d5f42d56716655c4b9b15c93721f986c35e3f4b0810e94a"} Dec 10 15:21:21 crc kubenswrapper[4727]: I1210 15:21:21.784347 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" podStartSLOduration=1.221950835 podStartE2EDuration="2.784323352s" podCreationTimestamp="2025-12-10 15:21:19 +0000 UTC" firstStartedPulling="2025-12-10 15:21:19.942155615 +0000 UTC m=+2984.136930157" lastFinishedPulling="2025-12-10 15:21:21.504528122 +0000 UTC m=+2985.699302674" observedRunningTime="2025-12-10 15:21:21.780336102 +0000 UTC m=+2985.975110654" watchObservedRunningTime="2025-12-10 15:21:21.784323352 +0000 UTC m=+2985.979097894" Dec 10 15:21:24 crc kubenswrapper[4727]: E1210 15:21:24.565031 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:21:34 crc kubenswrapper[4727]: E1210 15:21:34.565204 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:21:39 crc kubenswrapper[4727]: E1210 15:21:39.565772 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:21:45 crc kubenswrapper[4727]: E1210 15:21:45.567213 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:21:50 crc kubenswrapper[4727]: E1210 15:21:50.566245 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:22:00 crc kubenswrapper[4727]: E1210 15:22:00.568810 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:22:05 crc kubenswrapper[4727]: E1210 15:22:05.566871 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:22:15 crc kubenswrapper[4727]: E1210 15:22:15.565816 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:22:18 crc kubenswrapper[4727]: E1210 15:22:18.564943 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:22:28 crc kubenswrapper[4727]: E1210 15:22:28.567649 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:22:31 crc kubenswrapper[4727]: E1210 15:22:31.565161 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:22:42 crc kubenswrapper[4727]: E1210 15:22:42.565606 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:22:45 crc kubenswrapper[4727]: E1210 15:22:45.565602 4727 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:22:53 crc kubenswrapper[4727]: E1210 15:22:53.567248 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:23:00 crc kubenswrapper[4727]: I1210 15:23:00.566344 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:23:00 crc kubenswrapper[4727]: E1210 15:23:00.692713 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:23:00 crc kubenswrapper[4727]: E1210 15:23:00.692772 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:23:00 crc kubenswrapper[4727]: E1210 15:23:00.692929 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:23:00 crc kubenswrapper[4727]: E1210 15:23:00.694091 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:23:07 crc kubenswrapper[4727]: E1210 15:23:07.565797 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:23:15 crc kubenswrapper[4727]: E1210 15:23:15.566360 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:23:19 crc kubenswrapper[4727]: E1210 15:23:19.566781 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:23:29 crc kubenswrapper[4727]: E1210 15:23:29.565529 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:23:30 crc kubenswrapper[4727]: E1210 15:23:30.566107 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:23:37 crc kubenswrapper[4727]: I1210 15:23:37.723825 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:23:37 crc kubenswrapper[4727]: I1210 15:23:37.724398 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:23:44 crc kubenswrapper[4727]: E1210 15:23:44.565480 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:23:44 crc kubenswrapper[4727]: E1210 15:23:44.697771 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:23:44 crc kubenswrapper[4727]: E1210 15:23:44.697833 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:23:44 crc kubenswrapper[4727]: E1210 15:23:44.697984 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:23:44 crc kubenswrapper[4727]: E1210 15:23:44.699338 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:23:55 crc kubenswrapper[4727]: E1210 15:23:55.642696 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:23:55 crc kubenswrapper[4727]: E1210 15:23:55.656392 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:24:07 crc kubenswrapper[4727]: E1210 15:24:07.566320 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:24:07 crc kubenswrapper[4727]: E1210 15:24:07.566371 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:24:07 crc kubenswrapper[4727]: I1210 15:24:07.723998 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:24:07 crc kubenswrapper[4727]: I1210 15:24:07.724095 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:24:20 crc kubenswrapper[4727]: E1210 15:24:20.565560 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:24:22 crc kubenswrapper[4727]: E1210 15:24:22.566968 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:24:33 crc kubenswrapper[4727]: E1210 15:24:33.565120 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:24:34 crc kubenswrapper[4727]: E1210 15:24:34.566462 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:24:37 crc kubenswrapper[4727]: I1210 15:24:37.723931 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:24:37 crc kubenswrapper[4727]: I1210 15:24:37.724591 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:24:37 crc kubenswrapper[4727]: I1210 15:24:37.724661 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 15:24:37 crc kubenswrapper[4727]: I1210 15:24:37.725801 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:24:37 crc kubenswrapper[4727]: I1210 15:24:37.725882 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" gracePeriod=600 Dec 10 15:24:37 crc kubenswrapper[4727]: E1210 15:24:37.855614 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:24:38 crc kubenswrapper[4727]: I1210 15:24:38.134242 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" exitCode=0 Dec 10 15:24:38 crc kubenswrapper[4727]: I1210 15:24:38.134307 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3"} Dec 10 15:24:38 crc kubenswrapper[4727]: I1210 15:24:38.134417 4727 scope.go:117] "RemoveContainer" containerID="6195be96b514419e18bf6ce7d1ba910b002fff9465fc1e1d0763a30d95f493fe" Dec 10 15:24:38 crc kubenswrapper[4727]: I1210 15:24:38.135506 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:24:38 crc kubenswrapper[4727]: E1210 15:24:38.135832 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:24:45 crc kubenswrapper[4727]: I1210 15:24:45.556942 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zqmqm"] Dec 10 15:24:45 crc kubenswrapper[4727]: I1210 15:24:45.560834 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:24:45 crc kubenswrapper[4727]: E1210 15:24:45.567383 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:24:45 crc kubenswrapper[4727]: I1210 15:24:45.579385 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zqmqm"] Dec 10 15:24:45 crc kubenswrapper[4727]: I1210 15:24:45.626185 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-catalog-content\") pod \"certified-operators-zqmqm\" (UID: \"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc\") " pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:24:45 crc kubenswrapper[4727]: I1210 15:24:45.626356 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-utilities\") pod \"certified-operators-zqmqm\" (UID: \"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc\") " pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:24:45 crc kubenswrapper[4727]: I1210 15:24:45.626978 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg749\" (UniqueName: \"kubernetes.io/projected/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-kube-api-access-fg749\") pod \"certified-operators-zqmqm\" (UID: \"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc\") " pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:24:45 crc kubenswrapper[4727]: I1210 15:24:45.729373 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-utilities\") pod \"certified-operators-zqmqm\" (UID: \"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc\") " pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:24:45 crc kubenswrapper[4727]: I1210 15:24:45.729591 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg749\" (UniqueName: \"kubernetes.io/projected/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-kube-api-access-fg749\") pod \"certified-operators-zqmqm\" (UID: \"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc\") " pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:24:45 crc kubenswrapper[4727]: I1210 15:24:45.729692 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-catalog-content\") pod \"certified-operators-zqmqm\" (UID: \"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc\") " pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:24:45 crc kubenswrapper[4727]: I1210 15:24:45.730042 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-utilities\") pod \"certified-operators-zqmqm\" (UID: \"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc\") " pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:24:45 crc kubenswrapper[4727]: I1210 15:24:45.730663 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-catalog-content\") pod \"certified-operators-zqmqm\" (UID: \"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc\") " pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:24:45 crc kubenswrapper[4727]: I1210 15:24:45.753539 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg749\" (UniqueName: \"kubernetes.io/projected/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-kube-api-access-fg749\") pod \"certified-operators-zqmqm\" (UID: \"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc\") " pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:24:45 crc kubenswrapper[4727]: I1210 15:24:45.889560 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:24:46 crc kubenswrapper[4727]: W1210 15:24:46.461368 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8bc5d7b_3c76_40d9_8d0e_b1c5e90914fc.slice/crio-6cf43b07e5f2be3ee7c31dd85e4d4eb55f824317d3be35601c9ed6848b64350d WatchSource:0}: Error finding container 6cf43b07e5f2be3ee7c31dd85e4d4eb55f824317d3be35601c9ed6848b64350d: Status 404 returned error can't find the container with id 6cf43b07e5f2be3ee7c31dd85e4d4eb55f824317d3be35601c9ed6848b64350d Dec 10 15:24:46 crc kubenswrapper[4727]: I1210 15:24:46.464800 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zqmqm"] Dec 10 15:24:47 crc kubenswrapper[4727]: I1210 15:24:47.239139 4727 generic.go:334] "Generic (PLEG): container finished" podID="e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc" containerID="31b8f8527dfffafdaa55580570e9daa95a1de417e4d0f80c7c059d06de2b2c76" exitCode=0 Dec 10 15:24:47 crc kubenswrapper[4727]: I1210 15:24:47.239201 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqmqm" event={"ID":"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc","Type":"ContainerDied","Data":"31b8f8527dfffafdaa55580570e9daa95a1de417e4d0f80c7c059d06de2b2c76"} Dec 10 15:24:47 crc kubenswrapper[4727]: I1210 15:24:47.239437 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqmqm" event={"ID":"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc","Type":"ContainerStarted","Data":"6cf43b07e5f2be3ee7c31dd85e4d4eb55f824317d3be35601c9ed6848b64350d"} Dec 10 15:24:47 crc kubenswrapper[4727]: E1210 15:24:47.565143 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:24:48 crc kubenswrapper[4727]: I1210 15:24:48.253892 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqmqm" event={"ID":"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc","Type":"ContainerStarted","Data":"32df27a3f023ecbd9fd045e473a4945c5d8a16fd6617f3e776623c1247f1ada7"} Dec 10 15:24:49 crc kubenswrapper[4727]: I1210 15:24:49.264164 4727 generic.go:334] "Generic (PLEG): container finished" podID="e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc" containerID="32df27a3f023ecbd9fd045e473a4945c5d8a16fd6617f3e776623c1247f1ada7" exitCode=0 Dec 10 15:24:49 crc 
kubenswrapper[4727]: I1210 15:24:49.264277 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqmqm" event={"ID":"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc","Type":"ContainerDied","Data":"32df27a3f023ecbd9fd045e473a4945c5d8a16fd6617f3e776623c1247f1ada7"} Dec 10 15:24:50 crc kubenswrapper[4727]: I1210 15:24:50.284174 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqmqm" event={"ID":"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc","Type":"ContainerStarted","Data":"6872c1512e76859ac0e4012ecafd1c5897c96af47b525ef9d6953e1fa215dfdc"} Dec 10 15:24:50 crc kubenswrapper[4727]: I1210 15:24:50.313587 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zqmqm" podStartSLOduration=2.8377043520000003 podStartE2EDuration="5.313553697s" podCreationTimestamp="2025-12-10 15:24:45 +0000 UTC" firstStartedPulling="2025-12-10 15:24:47.24082869 +0000 UTC m=+3191.435603232" lastFinishedPulling="2025-12-10 15:24:49.716678035 +0000 UTC m=+3193.911452577" observedRunningTime="2025-12-10 15:24:50.310229443 +0000 UTC m=+3194.505004005" watchObservedRunningTime="2025-12-10 15:24:50.313553697 +0000 UTC m=+3194.508328239" Dec 10 15:24:50 crc kubenswrapper[4727]: I1210 15:24:50.563352 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:24:50 crc kubenswrapper[4727]: E1210 15:24:50.563765 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:24:55 crc kubenswrapper[4727]: I1210 15:24:55.890173 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:24:55 crc kubenswrapper[4727]: I1210 15:24:55.890503 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:24:55 crc kubenswrapper[4727]: I1210 15:24:55.972073 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:24:56 crc kubenswrapper[4727]: I1210 15:24:56.402791 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:24:56 crc kubenswrapper[4727]: I1210 15:24:56.450304 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zqmqm"] Dec 10 15:24:57 crc kubenswrapper[4727]: E1210 15:24:57.564785 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:24:58 crc kubenswrapper[4727]: I1210 15:24:58.370338 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zqmqm" podUID="e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc" containerName="registry-server" 
containerID="cri-o://6872c1512e76859ac0e4012ecafd1c5897c96af47b525ef9d6953e1fa215dfdc" gracePeriod=2 Dec 10 15:24:59 crc kubenswrapper[4727]: I1210 15:24:59.388363 4727 generic.go:334] "Generic (PLEG): container finished" podID="e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc" containerID="6872c1512e76859ac0e4012ecafd1c5897c96af47b525ef9d6953e1fa215dfdc" exitCode=0 Dec 10 15:24:59 crc kubenswrapper[4727]: I1210 15:24:59.388455 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqmqm" event={"ID":"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc","Type":"ContainerDied","Data":"6872c1512e76859ac0e4012ecafd1c5897c96af47b525ef9d6953e1fa215dfdc"} Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.102498 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.192490 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-utilities\") pod \"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc\" (UID: \"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc\") " Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.192580 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-catalog-content\") pod \"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc\" (UID: \"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc\") " Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.192618 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg749\" (UniqueName: \"kubernetes.io/projected/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-kube-api-access-fg749\") pod \"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc\" (UID: \"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc\") " Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.193459 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-utilities" (OuterVolumeSpecName: "utilities") pod "e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc" (UID: "e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.194374 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.199623 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-kube-api-access-fg749" (OuterVolumeSpecName: "kube-api-access-fg749") pod "e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc" (UID: "e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc"). InnerVolumeSpecName "kube-api-access-fg749". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.267673 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc" (UID: "e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.296318 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.296359 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg749\" (UniqueName: \"kubernetes.io/projected/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc-kube-api-access-fg749\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.400506 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqmqm" event={"ID":"e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc","Type":"ContainerDied","Data":"6cf43b07e5f2be3ee7c31dd85e4d4eb55f824317d3be35601c9ed6848b64350d"} Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.400565 4727 scope.go:117] "RemoveContainer" containerID="6872c1512e76859ac0e4012ecafd1c5897c96af47b525ef9d6953e1fa215dfdc" Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.400585 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zqmqm" Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.438898 4727 scope.go:117] "RemoveContainer" containerID="32df27a3f023ecbd9fd045e473a4945c5d8a16fd6617f3e776623c1247f1ada7" Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.442821 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zqmqm"] Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.459315 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zqmqm"] Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.475235 4727 scope.go:117] "RemoveContainer" containerID="31b8f8527dfffafdaa55580570e9daa95a1de417e4d0f80c7c059d06de2b2c76" Dec 10 15:25:00 crc kubenswrapper[4727]: E1210 15:25:00.565008 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:25:00 crc kubenswrapper[4727]: I1210 15:25:00.576956 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc" path="/var/lib/kubelet/pods/e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc/volumes" Dec 10 15:25:02 crc kubenswrapper[4727]: I1210 15:25:02.563614 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:25:02 crc kubenswrapper[4727]: E1210 15:25:02.564378 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:25:10 crc kubenswrapper[4727]: E1210 15:25:10.565738 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:25:13 crc kubenswrapper[4727]: I1210 15:25:13.564228 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:25:13 crc kubenswrapper[4727]: E1210 15:25:13.565055 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:25:13 crc kubenswrapper[4727]: E1210 15:25:13.567999 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:25:22 crc kubenswrapper[4727]: E1210 15:25:22.567480 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:25:27 crc kubenswrapper[4727]: E1210 15:25:27.564846 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:25:28 crc kubenswrapper[4727]: I1210 15:25:28.563274 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:25:28 crc kubenswrapper[4727]: E1210 15:25:28.563798 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:25:33 crc kubenswrapper[4727]: E1210 15:25:33.565267 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:25:40 crc kubenswrapper[4727]: E1210 15:25:40.565968 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" 
podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:25:43 crc kubenswrapper[4727]: I1210 15:25:43.563693 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:25:43 crc kubenswrapper[4727]: E1210 15:25:43.564368 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:25:48 crc kubenswrapper[4727]: E1210 15:25:48.565697 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:25:54 crc kubenswrapper[4727]: E1210 15:25:54.565841 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:25:55 crc kubenswrapper[4727]: I1210 15:25:55.564526 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:25:55 crc kubenswrapper[4727]: E1210 15:25:55.565233 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:26:01 crc kubenswrapper[4727]: E1210 15:26:01.568031 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:26:08 crc kubenswrapper[4727]: E1210 15:26:08.564994 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:26:09 crc kubenswrapper[4727]: I1210 15:26:09.563039 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:26:09 crc kubenswrapper[4727]: E1210 15:26:09.563403 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:26:14 crc kubenswrapper[4727]: E1210 15:26:14.566343 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:26:19 crc kubenswrapper[4727]: E1210 15:26:19.567674 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:26:20 crc kubenswrapper[4727]: I1210 15:26:20.564777 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:26:20 crc kubenswrapper[4727]: E1210 15:26:20.565341 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:26:28 crc kubenswrapper[4727]: E1210 15:26:28.566660 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:26:30 crc kubenswrapper[4727]: E1210 15:26:30.566197 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:26:34 crc kubenswrapper[4727]: I1210 15:26:34.563499 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:26:34 crc kubenswrapper[4727]: E1210 15:26:34.564297 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:26:39 crc kubenswrapper[4727]: E1210 15:26:39.566487 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:26:41 crc kubenswrapper[4727]: E1210 15:26:41.568396 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:26:47 crc kubenswrapper[4727]: I1210 15:26:47.563262 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:26:47 crc kubenswrapper[4727]: E1210 15:26:47.564337 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:26:50 crc kubenswrapper[4727]: E1210 15:26:50.565003 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:26:55 crc kubenswrapper[4727]: E1210 15:26:55.567416 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:27:02 crc kubenswrapper[4727]: I1210 15:27:02.563944 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:27:02 crc kubenswrapper[4727]: E1210 15:27:02.564439 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:27:04 crc kubenswrapper[4727]: E1210 15:27:04.566308 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:27:06 crc kubenswrapper[4727]: E1210 15:27:06.578352 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:27:14 crc kubenswrapper[4727]: I1210 15:27:14.563027 4727 scope.go:117] 
"RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:27:14 crc kubenswrapper[4727]: E1210 15:27:14.563841 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:27:18 crc kubenswrapper[4727]: E1210 15:27:18.566218 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:27:20 crc kubenswrapper[4727]: E1210 15:27:20.566971 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:27:27 crc kubenswrapper[4727]: I1210 15:27:27.563326 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:27:27 crc kubenswrapper[4727]: E1210 15:27:27.564052 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:27:31 crc kubenswrapper[4727]: E1210 15:27:31.564853 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:27:34 crc kubenswrapper[4727]: E1210 15:27:34.564870 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:27:38 crc kubenswrapper[4727]: I1210 15:27:38.134182 4727 generic.go:334] "Generic (PLEG): container finished" podID="571070d9-6ff3-4477-b6dd-567afc8be7e1" containerID="49ed43472d037a994d5f42d56716655c4b9b15c93721f986c35e3f4b0810e94a" exitCode=2 Dec 10 15:27:38 crc kubenswrapper[4727]: I1210 15:27:38.134256 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" event={"ID":"571070d9-6ff3-4477-b6dd-567afc8be7e1","Type":"ContainerDied","Data":"49ed43472d037a994d5f42d56716655c4b9b15c93721f986c35e3f4b0810e94a"} Dec 10 15:27:39 crc kubenswrapper[4727]: I1210 
15:27:39.755449 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" Dec 10 15:27:39 crc kubenswrapper[4727]: I1210 15:27:39.815027 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/571070d9-6ff3-4477-b6dd-567afc8be7e1-inventory\") pod \"571070d9-6ff3-4477-b6dd-567afc8be7e1\" (UID: \"571070d9-6ff3-4477-b6dd-567afc8be7e1\") " Dec 10 15:27:39 crc kubenswrapper[4727]: I1210 15:27:39.815160 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4j62\" (UniqueName: \"kubernetes.io/projected/571070d9-6ff3-4477-b6dd-567afc8be7e1-kube-api-access-j4j62\") pod \"571070d9-6ff3-4477-b6dd-567afc8be7e1\" (UID: \"571070d9-6ff3-4477-b6dd-567afc8be7e1\") " Dec 10 15:27:39 crc kubenswrapper[4727]: I1210 15:27:39.815336 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/571070d9-6ff3-4477-b6dd-567afc8be7e1-ssh-key\") pod \"571070d9-6ff3-4477-b6dd-567afc8be7e1\" (UID: \"571070d9-6ff3-4477-b6dd-567afc8be7e1\") " Dec 10 15:27:39 crc kubenswrapper[4727]: I1210 15:27:39.829025 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/571070d9-6ff3-4477-b6dd-567afc8be7e1-kube-api-access-j4j62" (OuterVolumeSpecName: "kube-api-access-j4j62") pod "571070d9-6ff3-4477-b6dd-567afc8be7e1" (UID: "571070d9-6ff3-4477-b6dd-567afc8be7e1"). InnerVolumeSpecName "kube-api-access-j4j62". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:27:39 crc kubenswrapper[4727]: I1210 15:27:39.848140 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/571070d9-6ff3-4477-b6dd-567afc8be7e1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "571070d9-6ff3-4477-b6dd-567afc8be7e1" (UID: "571070d9-6ff3-4477-b6dd-567afc8be7e1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:27:39 crc kubenswrapper[4727]: I1210 15:27:39.851189 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/571070d9-6ff3-4477-b6dd-567afc8be7e1-inventory" (OuterVolumeSpecName: "inventory") pod "571070d9-6ff3-4477-b6dd-567afc8be7e1" (UID: "571070d9-6ff3-4477-b6dd-567afc8be7e1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:27:39 crc kubenswrapper[4727]: I1210 15:27:39.919563 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4j62\" (UniqueName: \"kubernetes.io/projected/571070d9-6ff3-4477-b6dd-567afc8be7e1-kube-api-access-j4j62\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:39 crc kubenswrapper[4727]: I1210 15:27:39.919610 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/571070d9-6ff3-4477-b6dd-567afc8be7e1-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:39 crc kubenswrapper[4727]: I1210 15:27:39.919621 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/571070d9-6ff3-4477-b6dd-567afc8be7e1-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:40 crc kubenswrapper[4727]: I1210 15:27:40.158159 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" event={"ID":"571070d9-6ff3-4477-b6dd-567afc8be7e1","Type":"ContainerDied","Data":"685bcdd03404869528cdaf4e42d367c3bf092bbaf5993d9e39b44169ed1e10ec"} Dec 10 15:27:40 crc kubenswrapper[4727]: I1210 15:27:40.158204 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="685bcdd03404869528cdaf4e42d367c3bf092bbaf5993d9e39b44169ed1e10ec" Dec 10 15:27:40 crc kubenswrapper[4727]: I1210 15:27:40.158265 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5" Dec 10 15:27:40 crc kubenswrapper[4727]: I1210 15:27:40.562768 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:27:40 crc kubenswrapper[4727]: E1210 15:27:40.563172 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:27:44 crc kubenswrapper[4727]: E1210 15:27:44.566861 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:27:47 crc kubenswrapper[4727]: E1210 15:27:47.565346 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:27:54 crc kubenswrapper[4727]: I1210 15:27:54.563563 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:27:54 crc kubenswrapper[4727]: E1210 15:27:54.564548 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:27:58 crc kubenswrapper[4727]: E1210 15:27:58.566707 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:28:00 crc kubenswrapper[4727]: E1210 15:28:00.567108 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:28:05 crc kubenswrapper[4727]: I1210 15:28:05.564018 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:28:05 crc kubenswrapper[4727]: E1210 15:28:05.564827 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:28:12 crc kubenswrapper[4727]: I1210 15:28:12.566021 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:28:12 crc kubenswrapper[4727]: E1210 15:28:12.689276 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:28:12 crc kubenswrapper[4727]: E1210 15:28:12.689379 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:28:12 crc kubenswrapper[4727]: E1210 15:28:12.689590 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:28:12 crc kubenswrapper[4727]: E1210 15:28:12.690956 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:28:15 crc kubenswrapper[4727]: E1210 15:28:15.565557 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.039295 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9"] Dec 10 15:28:17 crc kubenswrapper[4727]: E1210 15:28:17.040196 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc" containerName="registry-server" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.040211 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc" containerName="registry-server" Dec 10 15:28:17 crc kubenswrapper[4727]: E1210 15:28:17.040227 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="571070d9-6ff3-4477-b6dd-567afc8be7e1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.040234 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="571070d9-6ff3-4477-b6dd-567afc8be7e1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:28:17 crc kubenswrapper[4727]: E1210 15:28:17.040264 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc" containerName="extract-utilities" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.040270 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc" containerName="extract-utilities" Dec 10 15:28:17 crc kubenswrapper[4727]: E1210 15:28:17.040289 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc" containerName="extract-content" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.040295 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc" containerName="extract-content" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.040493 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bc5d7b-3c76-40d9-8d0e-b1c5e90914fc" containerName="registry-server" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.040512 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="571070d9-6ff3-4477-b6dd-567afc8be7e1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.041389 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.044344 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j82js" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.044365 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.045184 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.045663 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.051033 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9"] Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.117218 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/491442b9-eec3-46e4-9b19-998f5fcd72af-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9\" (UID: \"491442b9-eec3-46e4-9b19-998f5fcd72af\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.117291 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/491442b9-eec3-46e4-9b19-998f5fcd72af-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9\" (UID: \"491442b9-eec3-46e4-9b19-998f5fcd72af\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.117397 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz5r8\" (UniqueName: \"kubernetes.io/projected/491442b9-eec3-46e4-9b19-998f5fcd72af-kube-api-access-xz5r8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9\" (UID: \"491442b9-eec3-46e4-9b19-998f5fcd72af\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.220038 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/491442b9-eec3-46e4-9b19-998f5fcd72af-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9\" (UID: \"491442b9-eec3-46e4-9b19-998f5fcd72af\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.220119 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/491442b9-eec3-46e4-9b19-998f5fcd72af-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9\" (UID: \"491442b9-eec3-46e4-9b19-998f5fcd72af\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.220205 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz5r8\" (UniqueName: \"kubernetes.io/projected/491442b9-eec3-46e4-9b19-998f5fcd72af-kube-api-access-xz5r8\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9\" (UID: \"491442b9-eec3-46e4-9b19-998f5fcd72af\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.228608 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/491442b9-eec3-46e4-9b19-998f5fcd72af-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9\" (UID: \"491442b9-eec3-46e4-9b19-998f5fcd72af\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.233592 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/491442b9-eec3-46e4-9b19-998f5fcd72af-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9\" (UID: \"491442b9-eec3-46e4-9b19-998f5fcd72af\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.249527 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz5r8\" (UniqueName: \"kubernetes.io/projected/491442b9-eec3-46e4-9b19-998f5fcd72af-kube-api-access-xz5r8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9\" (UID: \"491442b9-eec3-46e4-9b19-998f5fcd72af\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" Dec 10 15:28:17 crc kubenswrapper[4727]: I1210 15:28:17.366791 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" Dec 10 15:28:18 crc kubenswrapper[4727]: I1210 15:28:18.006468 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9"] Dec 10 15:28:18 crc kubenswrapper[4727]: I1210 15:28:18.589651 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" event={"ID":"491442b9-eec3-46e4-9b19-998f5fcd72af","Type":"ContainerStarted","Data":"af28ac4e4f6443a3c642e1ae0904ff7a026253aa97d28a5c800de894ae7de168"} Dec 10 15:28:19 crc kubenswrapper[4727]: I1210 15:28:19.564032 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:28:19 crc kubenswrapper[4727]: E1210 15:28:19.564442 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:28:19 crc kubenswrapper[4727]: I1210 15:28:19.600048 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" event={"ID":"491442b9-eec3-46e4-9b19-998f5fcd72af","Type":"ContainerStarted","Data":"f657eb1d2593dcef52f4d0652cf10c2f9476e9c14450cc778238481459f62439"} Dec 10 15:28:19 crc kubenswrapper[4727]: I1210 15:28:19.647978 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" podStartSLOduration=2.179267314 podStartE2EDuration="2.647954795s" podCreationTimestamp="2025-12-10 
15:28:17 +0000 UTC" firstStartedPulling="2025-12-10 15:28:18.01615482 +0000 UTC m=+3402.210929372" lastFinishedPulling="2025-12-10 15:28:18.484842311 +0000 UTC m=+3402.679616853" observedRunningTime="2025-12-10 15:28:19.617532391 +0000 UTC m=+3403.812306943" watchObservedRunningTime="2025-12-10 15:28:19.647954795 +0000 UTC m=+3403.842729337" Dec 10 15:28:25 crc kubenswrapper[4727]: E1210 15:28:25.565554 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:28:30 crc kubenswrapper[4727]: E1210 15:28:30.566943 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:28:31 crc kubenswrapper[4727]: I1210 15:28:31.563084 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:28:31 crc kubenswrapper[4727]: E1210 15:28:31.563393 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:28:40 crc kubenswrapper[4727]: E1210 15:28:40.566543 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:28:42 crc kubenswrapper[4727]: E1210 15:28:42.564607 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:28:46 crc kubenswrapper[4727]: I1210 15:28:46.569941 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:28:46 crc kubenswrapper[4727]: E1210 15:28:46.570427 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:28:51 crc kubenswrapper[4727]: E1210 15:28:51.565940 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:28:56 crc kubenswrapper[4727]: E1210 15:28:56.752543 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:28:56 crc kubenswrapper[4727]: E1210 15:28:56.754016 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:28:56 crc kubenswrapper[4727]: E1210 15:28:56.754261 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:28:56 crc kubenswrapper[4727]: E1210 15:28:56.755558 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:29:00 crc kubenswrapper[4727]: I1210 15:29:00.563279 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:29:00 crc kubenswrapper[4727]: E1210 15:29:00.563868 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:29:03 crc kubenswrapper[4727]: E1210 15:29:03.565966 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:29:08 crc kubenswrapper[4727]: E1210 15:29:08.566854 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:29:12 crc kubenswrapper[4727]: I1210 15:29:12.563285 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:29:12 crc 
kubenswrapper[4727]: E1210 15:29:12.564118 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:29:14 crc kubenswrapper[4727]: E1210 15:29:14.566975 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:29:22 crc kubenswrapper[4727]: E1210 15:29:22.566751 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:29:24 crc kubenswrapper[4727]: I1210 15:29:24.563458 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:29:24 crc kubenswrapper[4727]: E1210 15:29:24.564029 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:29:27 crc kubenswrapper[4727]: E1210 15:29:27.565481 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:29:34 crc kubenswrapper[4727]: E1210 15:29:34.565431 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:29:36 crc kubenswrapper[4727]: I1210 15:29:36.571879 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:29:36 crc kubenswrapper[4727]: E1210 15:29:36.572537 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:29:40 crc kubenswrapper[4727]: E1210 15:29:40.565736 4727 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:29:47 crc kubenswrapper[4727]: I1210 15:29:47.563375 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:29:47 crc kubenswrapper[4727]: I1210 15:29:47.911088 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"77727fedf28eae66440b391e378f4f4ff14d64e2fd2001db5a1726dc5ac683da"} Dec 10 15:29:48 crc kubenswrapper[4727]: E1210 15:29:48.566243 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:29:51 crc kubenswrapper[4727]: E1210 15:29:51.565848 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.155614 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb"] Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.158242 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.163433 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.163472 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.167171 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb"] Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.330885 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0755e65b-af3b-4bea-ae78-5986fca7e5e4-secret-volume\") pod \"collect-profiles-29423010-bhvkb\" (UID: \"0755e65b-af3b-4bea-ae78-5986fca7e5e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.331158 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5lkt\" (UniqueName: \"kubernetes.io/projected/0755e65b-af3b-4bea-ae78-5986fca7e5e4-kube-api-access-h5lkt\") pod \"collect-profiles-29423010-bhvkb\" (UID: \"0755e65b-af3b-4bea-ae78-5986fca7e5e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.331242 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0755e65b-af3b-4bea-ae78-5986fca7e5e4-config-volume\") pod \"collect-profiles-29423010-bhvkb\" (UID: \"0755e65b-af3b-4bea-ae78-5986fca7e5e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.433834 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5lkt\" (UniqueName: \"kubernetes.io/projected/0755e65b-af3b-4bea-ae78-5986fca7e5e4-kube-api-access-h5lkt\") pod \"collect-profiles-29423010-bhvkb\" (UID: \"0755e65b-af3b-4bea-ae78-5986fca7e5e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.434051 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0755e65b-af3b-4bea-ae78-5986fca7e5e4-config-volume\") pod \"collect-profiles-29423010-bhvkb\" (UID: \"0755e65b-af3b-4bea-ae78-5986fca7e5e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.434159 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0755e65b-af3b-4bea-ae78-5986fca7e5e4-secret-volume\") pod \"collect-profiles-29423010-bhvkb\" (UID: \"0755e65b-af3b-4bea-ae78-5986fca7e5e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.435768 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0755e65b-af3b-4bea-ae78-5986fca7e5e4-config-volume\") pod 
\"collect-profiles-29423010-bhvkb\" (UID: \"0755e65b-af3b-4bea-ae78-5986fca7e5e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.441549 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0755e65b-af3b-4bea-ae78-5986fca7e5e4-secret-volume\") pod \"collect-profiles-29423010-bhvkb\" (UID: \"0755e65b-af3b-4bea-ae78-5986fca7e5e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.454025 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5lkt\" (UniqueName: \"kubernetes.io/projected/0755e65b-af3b-4bea-ae78-5986fca7e5e4-kube-api-access-h5lkt\") pod \"collect-profiles-29423010-bhvkb\" (UID: \"0755e65b-af3b-4bea-ae78-5986fca7e5e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.487447 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" Dec 10 15:30:00 crc kubenswrapper[4727]: I1210 15:30:00.996625 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb"] Dec 10 15:30:01 crc kubenswrapper[4727]: I1210 15:30:01.035937 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" event={"ID":"0755e65b-af3b-4bea-ae78-5986fca7e5e4","Type":"ContainerStarted","Data":"9bf0276fe19da0dce2c2b4ff596c70d8af6205e54eddf3c53f5c3f91444e5186"} Dec 10 15:30:01 crc kubenswrapper[4727]: E1210 15:30:01.566046 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:30:02 crc kubenswrapper[4727]: I1210 15:30:02.046725 4727 generic.go:334] "Generic (PLEG): container finished" podID="0755e65b-af3b-4bea-ae78-5986fca7e5e4" containerID="16f9eeac5c66863ab75eac15ab0b284c36252670f024c09ac7ed8a3f53f339d0" exitCode=0 Dec 10 15:30:02 crc kubenswrapper[4727]: I1210 15:30:02.046827 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" event={"ID":"0755e65b-af3b-4bea-ae78-5986fca7e5e4","Type":"ContainerDied","Data":"16f9eeac5c66863ab75eac15ab0b284c36252670f024c09ac7ed8a3f53f339d0"} Dec 10 15:30:03 crc kubenswrapper[4727]: I1210 15:30:03.525123 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" Dec 10 15:30:03 crc kubenswrapper[4727]: I1210 15:30:03.607470 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0755e65b-af3b-4bea-ae78-5986fca7e5e4-secret-volume\") pod \"0755e65b-af3b-4bea-ae78-5986fca7e5e4\" (UID: \"0755e65b-af3b-4bea-ae78-5986fca7e5e4\") " Dec 10 15:30:03 crc kubenswrapper[4727]: I1210 15:30:03.607582 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0755e65b-af3b-4bea-ae78-5986fca7e5e4-config-volume\") pod \"0755e65b-af3b-4bea-ae78-5986fca7e5e4\" (UID: \"0755e65b-af3b-4bea-ae78-5986fca7e5e4\") " Dec 10 15:30:03 crc kubenswrapper[4727]: I1210 15:30:03.607628 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5lkt\" (UniqueName: \"kubernetes.io/projected/0755e65b-af3b-4bea-ae78-5986fca7e5e4-kube-api-access-h5lkt\") pod \"0755e65b-af3b-4bea-ae78-5986fca7e5e4\" (UID: \"0755e65b-af3b-4bea-ae78-5986fca7e5e4\") " Dec 10 15:30:03 crc kubenswrapper[4727]: I1210 15:30:03.608708 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0755e65b-af3b-4bea-ae78-5986fca7e5e4-config-volume" (OuterVolumeSpecName: "config-volume") pod "0755e65b-af3b-4bea-ae78-5986fca7e5e4" (UID: "0755e65b-af3b-4bea-ae78-5986fca7e5e4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:30:03 crc kubenswrapper[4727]: I1210 15:30:03.614032 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0755e65b-af3b-4bea-ae78-5986fca7e5e4-kube-api-access-h5lkt" (OuterVolumeSpecName: "kube-api-access-h5lkt") pod "0755e65b-af3b-4bea-ae78-5986fca7e5e4" (UID: "0755e65b-af3b-4bea-ae78-5986fca7e5e4"). InnerVolumeSpecName "kube-api-access-h5lkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:30:03 crc kubenswrapper[4727]: I1210 15:30:03.614303 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0755e65b-af3b-4bea-ae78-5986fca7e5e4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0755e65b-af3b-4bea-ae78-5986fca7e5e4" (UID: "0755e65b-af3b-4bea-ae78-5986fca7e5e4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:30:03 crc kubenswrapper[4727]: I1210 15:30:03.711669 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0755e65b-af3b-4bea-ae78-5986fca7e5e4-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:03 crc kubenswrapper[4727]: I1210 15:30:03.711714 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5lkt\" (UniqueName: \"kubernetes.io/projected/0755e65b-af3b-4bea-ae78-5986fca7e5e4-kube-api-access-h5lkt\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:03 crc kubenswrapper[4727]: I1210 15:30:03.711731 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0755e65b-af3b-4bea-ae78-5986fca7e5e4-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:04 crc kubenswrapper[4727]: I1210 15:30:04.066233 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" event={"ID":"0755e65b-af3b-4bea-ae78-5986fca7e5e4","Type":"ContainerDied","Data":"9bf0276fe19da0dce2c2b4ff596c70d8af6205e54eddf3c53f5c3f91444e5186"} Dec 10 15:30:04 crc kubenswrapper[4727]: I1210 15:30:04.066294 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bf0276fe19da0dce2c2b4ff596c70d8af6205e54eddf3c53f5c3f91444e5186" Dec 10 15:30:04 crc kubenswrapper[4727]: I1210 15:30:04.066308 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-bhvkb" Dec 10 15:30:04 crc kubenswrapper[4727]: E1210 15:30:04.565958 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:30:04 crc kubenswrapper[4727]: I1210 15:30:04.614488 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt"] Dec 10 15:30:04 crc kubenswrapper[4727]: I1210 15:30:04.623439 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422965-zvlgt"] Dec 10 15:30:06 crc kubenswrapper[4727]: I1210 15:30:06.578130 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cebada22-c034-440c-af94-882f29f42989" path="/var/lib/kubelet/pods/cebada22-c034-440c-af94-882f29f42989/volumes" Dec 10 15:30:12 crc kubenswrapper[4727]: E1210 15:30:12.566567 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:30:18 crc kubenswrapper[4727]: E1210 15:30:18.565645 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:30:22 crc kubenswrapper[4727]: 
I1210 15:30:22.664190 4727 scope.go:117] "RemoveContainer" containerID="e80c6048c15ace1d18c2716a561110a503ed21b16587f93646d21a09ceabdd6d" Dec 10 15:30:23 crc kubenswrapper[4727]: E1210 15:30:23.566098 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:30:32 crc kubenswrapper[4727]: E1210 15:30:32.564779 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:30:35 crc kubenswrapper[4727]: E1210 15:30:35.565731 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:30:46 crc kubenswrapper[4727]: E1210 15:30:46.575404 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:30:46 crc kubenswrapper[4727]: E1210 15:30:46.582169 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:30:57 crc kubenswrapper[4727]: E1210 15:30:57.566651 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:31:01 crc kubenswrapper[4727]: E1210 15:31:01.565780 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:31:12 crc kubenswrapper[4727]: E1210 15:31:12.565705 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.540673 4727 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-ffsmq"] Dec 10 15:31:13 crc kubenswrapper[4727]: E1210 15:31:13.541172 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0755e65b-af3b-4bea-ae78-5986fca7e5e4" containerName="collect-profiles" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.541188 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0755e65b-af3b-4bea-ae78-5986fca7e5e4" containerName="collect-profiles" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.541398 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0755e65b-af3b-4bea-ae78-5986fca7e5e4" containerName="collect-profiles" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.543114 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:13 crc kubenswrapper[4727]: E1210 15:31:13.564742 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.565753 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ffsmq"] Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.650129 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa99925-e4b5-4bb7-9038-dac984468262-catalog-content\") pod \"redhat-operators-ffsmq\" (UID: \"2fa99925-e4b5-4bb7-9038-dac984468262\") " pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.650386 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb7pw\" (UniqueName: \"kubernetes.io/projected/2fa99925-e4b5-4bb7-9038-dac984468262-kube-api-access-gb7pw\") pod \"redhat-operators-ffsmq\" (UID: \"2fa99925-e4b5-4bb7-9038-dac984468262\") " pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.650820 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa99925-e4b5-4bb7-9038-dac984468262-utilities\") pod \"redhat-operators-ffsmq\" (UID: \"2fa99925-e4b5-4bb7-9038-dac984468262\") " pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.737122 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pgn8w"] Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.740124 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.752610 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb7pw\" (UniqueName: \"kubernetes.io/projected/2fa99925-e4b5-4bb7-9038-dac984468262-kube-api-access-gb7pw\") pod \"redhat-operators-ffsmq\" (UID: \"2fa99925-e4b5-4bb7-9038-dac984468262\") " pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.752712 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f58891-f414-4aba-9e17-4e9fccbce8ed-catalog-content\") pod \"redhat-marketplace-pgn8w\" (UID: \"c7f58891-f414-4aba-9e17-4e9fccbce8ed\") " pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.752764 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa99925-e4b5-4bb7-9038-dac984468262-utilities\") pod \"redhat-operators-ffsmq\" (UID: \"2fa99925-e4b5-4bb7-9038-dac984468262\") " pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.752822 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4xj5\" (UniqueName: \"kubernetes.io/projected/c7f58891-f414-4aba-9e17-4e9fccbce8ed-kube-api-access-t4xj5\") pod \"redhat-marketplace-pgn8w\" (UID: \"c7f58891-f414-4aba-9e17-4e9fccbce8ed\") " pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.752854 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f58891-f414-4aba-9e17-4e9fccbce8ed-utilities\") pod \"redhat-marketplace-pgn8w\" (UID: \"c7f58891-f414-4aba-9e17-4e9fccbce8ed\") " pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.752933 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa99925-e4b5-4bb7-9038-dac984468262-catalog-content\") pod \"redhat-operators-ffsmq\" (UID: \"2fa99925-e4b5-4bb7-9038-dac984468262\") " pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.753550 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa99925-e4b5-4bb7-9038-dac984468262-utilities\") pod \"redhat-operators-ffsmq\" (UID: \"2fa99925-e4b5-4bb7-9038-dac984468262\") " pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.753625 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa99925-e4b5-4bb7-9038-dac984468262-catalog-content\") pod \"redhat-operators-ffsmq\" (UID: \"2fa99925-e4b5-4bb7-9038-dac984468262\") " pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.776513 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgn8w"] Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.790452 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gb7pw\" (UniqueName: \"kubernetes.io/projected/2fa99925-e4b5-4bb7-9038-dac984468262-kube-api-access-gb7pw\") pod \"redhat-operators-ffsmq\" (UID: \"2fa99925-e4b5-4bb7-9038-dac984468262\") " pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.854322 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4xj5\" (UniqueName: \"kubernetes.io/projected/c7f58891-f414-4aba-9e17-4e9fccbce8ed-kube-api-access-t4xj5\") pod \"redhat-marketplace-pgn8w\" (UID: \"c7f58891-f414-4aba-9e17-4e9fccbce8ed\") " pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.854383 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f58891-f414-4aba-9e17-4e9fccbce8ed-utilities\") pod \"redhat-marketplace-pgn8w\" (UID: \"c7f58891-f414-4aba-9e17-4e9fccbce8ed\") " pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.854520 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f58891-f414-4aba-9e17-4e9fccbce8ed-catalog-content\") pod \"redhat-marketplace-pgn8w\" (UID: \"c7f58891-f414-4aba-9e17-4e9fccbce8ed\") " pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.855177 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f58891-f414-4aba-9e17-4e9fccbce8ed-utilities\") pod \"redhat-marketplace-pgn8w\" (UID: \"c7f58891-f414-4aba-9e17-4e9fccbce8ed\") " pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.855319 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f58891-f414-4aba-9e17-4e9fccbce8ed-catalog-content\") pod \"redhat-marketplace-pgn8w\" (UID: \"c7f58891-f414-4aba-9e17-4e9fccbce8ed\") " pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.867568 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:13 crc kubenswrapper[4727]: I1210 15:31:13.877788 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4xj5\" (UniqueName: \"kubernetes.io/projected/c7f58891-f414-4aba-9e17-4e9fccbce8ed-kube-api-access-t4xj5\") pod \"redhat-marketplace-pgn8w\" (UID: \"c7f58891-f414-4aba-9e17-4e9fccbce8ed\") " pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:14 crc kubenswrapper[4727]: I1210 15:31:14.076442 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:14 crc kubenswrapper[4727]: I1210 15:31:14.451788 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ffsmq"] Dec 10 15:31:14 crc kubenswrapper[4727]: I1210 15:31:14.641998 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgn8w"] Dec 10 15:31:14 crc kubenswrapper[4727]: W1210 15:31:14.649357 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7f58891_f414_4aba_9e17_4e9fccbce8ed.slice/crio-78bc5563deba653e8e181cb94bcfd5cb4a40fbe22bf38e30c02f545a605dfd16 WatchSource:0}: Error finding container 78bc5563deba653e8e181cb94bcfd5cb4a40fbe22bf38e30c02f545a605dfd16: Status 404 returned error can't find the container with id 78bc5563deba653e8e181cb94bcfd5cb4a40fbe22bf38e30c02f545a605dfd16 Dec 10 15:31:14 crc kubenswrapper[4727]: I1210 15:31:14.832672 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgn8w" event={"ID":"c7f58891-f414-4aba-9e17-4e9fccbce8ed","Type":"ContainerStarted","Data":"78bc5563deba653e8e181cb94bcfd5cb4a40fbe22bf38e30c02f545a605dfd16"} Dec 10 15:31:14 crc kubenswrapper[4727]: I1210 15:31:14.837244 4727 generic.go:334] "Generic (PLEG): container finished" podID="2fa99925-e4b5-4bb7-9038-dac984468262" containerID="48ba5d808dcfd370308e23068686002c0e84a728e824bb577ec8be8d79f8cd1d" exitCode=0 Dec 10 15:31:14 crc kubenswrapper[4727]: I1210 15:31:14.837439 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffsmq" event={"ID":"2fa99925-e4b5-4bb7-9038-dac984468262","Type":"ContainerDied","Data":"48ba5d808dcfd370308e23068686002c0e84a728e824bb577ec8be8d79f8cd1d"} Dec 10 15:31:14 crc kubenswrapper[4727]: I1210 15:31:14.837532 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffsmq" event={"ID":"2fa99925-e4b5-4bb7-9038-dac984468262","Type":"ContainerStarted","Data":"185d6626b036a373eace6ddaaaa6cb190bc1e166f3b0eff68989378400c9075c"} Dec 10 15:31:15 crc kubenswrapper[4727]: I1210 15:31:15.857524 4727 generic.go:334] "Generic (PLEG): container finished" podID="c7f58891-f414-4aba-9e17-4e9fccbce8ed" containerID="f3c73b2f03a9306e0dca914606ae524afe0c6ee74d311e42193fd6785a076b76" exitCode=0 Dec 10 15:31:15 crc kubenswrapper[4727]: I1210 15:31:15.857675 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgn8w" event={"ID":"c7f58891-f414-4aba-9e17-4e9fccbce8ed","Type":"ContainerDied","Data":"f3c73b2f03a9306e0dca914606ae524afe0c6ee74d311e42193fd6785a076b76"} Dec 10 15:31:15 crc kubenswrapper[4727]: I1210 15:31:15.939553 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-74w2d"] Dec 10 15:31:15 crc kubenswrapper[4727]: I1210 15:31:15.942425 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:15 crc kubenswrapper[4727]: I1210 15:31:15.955574 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-74w2d"] Dec 10 15:31:16 crc kubenswrapper[4727]: I1210 15:31:16.005573 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjwcg\" (UniqueName: \"kubernetes.io/projected/5abb86a9-9d6e-49c4-8359-2207f3aa7490-kube-api-access-sjwcg\") pod \"community-operators-74w2d\" (UID: \"5abb86a9-9d6e-49c4-8359-2207f3aa7490\") " pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:16 crc kubenswrapper[4727]: I1210 15:31:16.005627 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5abb86a9-9d6e-49c4-8359-2207f3aa7490-catalog-content\") pod \"community-operators-74w2d\" (UID: \"5abb86a9-9d6e-49c4-8359-2207f3aa7490\") " pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:16 crc kubenswrapper[4727]: I1210 15:31:16.005700 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5abb86a9-9d6e-49c4-8359-2207f3aa7490-utilities\") pod \"community-operators-74w2d\" (UID: \"5abb86a9-9d6e-49c4-8359-2207f3aa7490\") " pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:16 crc kubenswrapper[4727]: I1210 15:31:16.107707 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5abb86a9-9d6e-49c4-8359-2207f3aa7490-utilities\") pod \"community-operators-74w2d\" (UID: \"5abb86a9-9d6e-49c4-8359-2207f3aa7490\") " pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:16 crc kubenswrapper[4727]: I1210 15:31:16.107999 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjwcg\" (UniqueName: \"kubernetes.io/projected/5abb86a9-9d6e-49c4-8359-2207f3aa7490-kube-api-access-sjwcg\") pod \"community-operators-74w2d\" (UID: \"5abb86a9-9d6e-49c4-8359-2207f3aa7490\") " pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:16 crc kubenswrapper[4727]: I1210 15:31:16.108053 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5abb86a9-9d6e-49c4-8359-2207f3aa7490-catalog-content\") pod \"community-operators-74w2d\" (UID: \"5abb86a9-9d6e-49c4-8359-2207f3aa7490\") " pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:16 crc kubenswrapper[4727]: I1210 15:31:16.108410 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5abb86a9-9d6e-49c4-8359-2207f3aa7490-utilities\") pod \"community-operators-74w2d\" (UID: \"5abb86a9-9d6e-49c4-8359-2207f3aa7490\") " pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:16 crc kubenswrapper[4727]: I1210 15:31:16.108534 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5abb86a9-9d6e-49c4-8359-2207f3aa7490-catalog-content\") pod \"community-operators-74w2d\" (UID: \"5abb86a9-9d6e-49c4-8359-2207f3aa7490\") " pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:16 crc kubenswrapper[4727]: I1210 15:31:16.215581 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sjwcg\" (UniqueName: \"kubernetes.io/projected/5abb86a9-9d6e-49c4-8359-2207f3aa7490-kube-api-access-sjwcg\") pod \"community-operators-74w2d\" (UID: \"5abb86a9-9d6e-49c4-8359-2207f3aa7490\") " pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:16 crc kubenswrapper[4727]: I1210 15:31:16.263747 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:16 crc kubenswrapper[4727]: I1210 15:31:16.878091 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-74w2d"] Dec 10 15:31:16 crc kubenswrapper[4727]: I1210 15:31:16.891987 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgn8w" event={"ID":"c7f58891-f414-4aba-9e17-4e9fccbce8ed","Type":"ContainerStarted","Data":"37cbfac73a1a3c5c4e540bf8dfe69193b0bc88090346954d104b9bc59f267bd9"} Dec 10 15:31:16 crc kubenswrapper[4727]: I1210 15:31:16.899490 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffsmq" event={"ID":"2fa99925-e4b5-4bb7-9038-dac984468262","Type":"ContainerStarted","Data":"467770d42ca025291238bfae2687afe0eab676352e06fff4b33073d1a0b95912"} Dec 10 15:31:17 crc kubenswrapper[4727]: I1210 15:31:17.912514 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74w2d" event={"ID":"5abb86a9-9d6e-49c4-8359-2207f3aa7490","Type":"ContainerStarted","Data":"edb88bc76d65c4564ca5dd076d56891afbde082916835494acba18f3c20c5898"} Dec 10 15:31:17 crc kubenswrapper[4727]: I1210 15:31:17.913067 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74w2d" event={"ID":"5abb86a9-9d6e-49c4-8359-2207f3aa7490","Type":"ContainerStarted","Data":"901113d6c0261180351ffa4c67923a76bae5a4fdfaed92bad769aec6a993c9cd"} Dec 10 15:31:17 crc kubenswrapper[4727]: I1210 15:31:17.915459 4727 generic.go:334] "Generic (PLEG): container finished" podID="c7f58891-f414-4aba-9e17-4e9fccbce8ed" containerID="37cbfac73a1a3c5c4e540bf8dfe69193b0bc88090346954d104b9bc59f267bd9" exitCode=0 Dec 10 15:31:17 crc kubenswrapper[4727]: I1210 15:31:17.915529 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgn8w" event={"ID":"c7f58891-f414-4aba-9e17-4e9fccbce8ed","Type":"ContainerDied","Data":"37cbfac73a1a3c5c4e540bf8dfe69193b0bc88090346954d104b9bc59f267bd9"} Dec 10 15:31:18 crc kubenswrapper[4727]: I1210 15:31:18.928973 4727 generic.go:334] "Generic (PLEG): container finished" podID="5abb86a9-9d6e-49c4-8359-2207f3aa7490" containerID="edb88bc76d65c4564ca5dd076d56891afbde082916835494acba18f3c20c5898" exitCode=0 Dec 10 15:31:18 crc kubenswrapper[4727]: I1210 15:31:18.929056 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74w2d" event={"ID":"5abb86a9-9d6e-49c4-8359-2207f3aa7490","Type":"ContainerDied","Data":"edb88bc76d65c4564ca5dd076d56891afbde082916835494acba18f3c20c5898"} Dec 10 15:31:20 crc kubenswrapper[4727]: I1210 15:31:20.952872 4727 generic.go:334] "Generic (PLEG): container finished" podID="2fa99925-e4b5-4bb7-9038-dac984468262" containerID="467770d42ca025291238bfae2687afe0eab676352e06fff4b33073d1a0b95912" exitCode=0 Dec 10 15:31:20 crc kubenswrapper[4727]: I1210 15:31:20.952946 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffsmq" 
event={"ID":"2fa99925-e4b5-4bb7-9038-dac984468262","Type":"ContainerDied","Data":"467770d42ca025291238bfae2687afe0eab676352e06fff4b33073d1a0b95912"} Dec 10 15:31:20 crc kubenswrapper[4727]: I1210 15:31:20.956958 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74w2d" event={"ID":"5abb86a9-9d6e-49c4-8359-2207f3aa7490","Type":"ContainerStarted","Data":"57d2517b2c22f44557312ecbe87fcb6d18533a0d0c1d93423905d51897201c7d"} Dec 10 15:31:20 crc kubenswrapper[4727]: I1210 15:31:20.961578 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgn8w" event={"ID":"c7f58891-f414-4aba-9e17-4e9fccbce8ed","Type":"ContainerStarted","Data":"5754cdc2ae36a44f31b45f59c346d40e33808a1f9f71a4782c0ba76d2253cc35"} Dec 10 15:31:21 crc kubenswrapper[4727]: I1210 15:31:21.039455 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pgn8w" podStartSLOduration=3.543554161 podStartE2EDuration="8.039436464s" podCreationTimestamp="2025-12-10 15:31:13 +0000 UTC" firstStartedPulling="2025-12-10 15:31:15.86066959 +0000 UTC m=+3580.055444122" lastFinishedPulling="2025-12-10 15:31:20.356551883 +0000 UTC m=+3584.551326425" observedRunningTime="2025-12-10 15:31:21.036756177 +0000 UTC m=+3585.231530729" watchObservedRunningTime="2025-12-10 15:31:21.039436464 +0000 UTC m=+3585.234211006" Dec 10 15:31:22 crc kubenswrapper[4727]: I1210 15:31:22.337519 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffsmq" event={"ID":"2fa99925-e4b5-4bb7-9038-dac984468262","Type":"ContainerStarted","Data":"8bf5a38d8ffc72938d8863a54245151b30360efed1bd3a9abc02d777c5e4463b"} Dec 10 15:31:22 crc kubenswrapper[4727]: I1210 15:31:22.359156 4727 generic.go:334] "Generic (PLEG): container finished" podID="5abb86a9-9d6e-49c4-8359-2207f3aa7490" containerID="57d2517b2c22f44557312ecbe87fcb6d18533a0d0c1d93423905d51897201c7d" exitCode=0 Dec 10 15:31:22 crc kubenswrapper[4727]: I1210 15:31:22.359261 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74w2d" event={"ID":"5abb86a9-9d6e-49c4-8359-2207f3aa7490","Type":"ContainerDied","Data":"57d2517b2c22f44557312ecbe87fcb6d18533a0d0c1d93423905d51897201c7d"} Dec 10 15:31:22 crc kubenswrapper[4727]: I1210 15:31:22.391763 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ffsmq" podStartSLOduration=2.796956527 podStartE2EDuration="9.391743338s" podCreationTimestamp="2025-12-10 15:31:13 +0000 UTC" firstStartedPulling="2025-12-10 15:31:14.840106706 +0000 UTC m=+3579.034881248" lastFinishedPulling="2025-12-10 15:31:21.434893517 +0000 UTC m=+3585.629668059" observedRunningTime="2025-12-10 15:31:22.378578887 +0000 UTC m=+3586.573353429" watchObservedRunningTime="2025-12-10 15:31:22.391743338 +0000 UTC m=+3586.586517880" Dec 10 15:31:23 crc kubenswrapper[4727]: E1210 15:31:23.564317 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:31:23 crc kubenswrapper[4727]: I1210 15:31:23.868032 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:23 crc kubenswrapper[4727]: I1210 15:31:23.868373 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:24 crc kubenswrapper[4727]: I1210 15:31:24.076702 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:24 crc kubenswrapper[4727]: I1210 15:31:24.076775 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:24 crc kubenswrapper[4727]: I1210 15:31:24.379607 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74w2d" event={"ID":"5abb86a9-9d6e-49c4-8359-2207f3aa7490","Type":"ContainerStarted","Data":"699d7ce801d27517e0d3e40a3dc73153d538fe33bbc8cbb6aa2b55395fcd065f"} Dec 10 15:31:24 crc kubenswrapper[4727]: I1210 15:31:24.407598 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-74w2d" podStartSLOduration=5.000296624 podStartE2EDuration="9.407578032s" podCreationTimestamp="2025-12-10 15:31:15 +0000 UTC" firstStartedPulling="2025-12-10 15:31:18.931766306 +0000 UTC m=+3583.126540858" lastFinishedPulling="2025-12-10 15:31:23.339047724 +0000 UTC m=+3587.533822266" observedRunningTime="2025-12-10 15:31:24.399059998 +0000 UTC m=+3588.593834540" watchObservedRunningTime="2025-12-10 15:31:24.407578032 +0000 UTC m=+3588.602352574" Dec 10 15:31:24 crc kubenswrapper[4727]: I1210 15:31:24.924099 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ffsmq" podUID="2fa99925-e4b5-4bb7-9038-dac984468262" containerName="registry-server" probeResult="failure" output=< Dec 10 15:31:24 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Dec 10 15:31:24 crc kubenswrapper[4727]: > Dec 10 15:31:25 crc kubenswrapper[4727]: I1210 15:31:25.130511 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-pgn8w" podUID="c7f58891-f414-4aba-9e17-4e9fccbce8ed" containerName="registry-server" probeResult="failure" output=< Dec 10 15:31:25 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Dec 10 15:31:25 crc kubenswrapper[4727]: > Dec 10 15:31:26 crc kubenswrapper[4727]: I1210 15:31:26.264106 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:26 crc kubenswrapper[4727]: I1210 15:31:26.264283 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:26 crc kubenswrapper[4727]: I1210 15:31:26.325989 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:28 crc kubenswrapper[4727]: E1210 15:31:28.565189 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:31:33 crc kubenswrapper[4727]: I1210 15:31:33.919129 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:33 crc kubenswrapper[4727]: I1210 15:31:33.980161 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:34 crc kubenswrapper[4727]: I1210 15:31:34.129658 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:34 crc kubenswrapper[4727]: I1210 15:31:34.172512 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ffsmq"] Dec 10 15:31:34 crc kubenswrapper[4727]: I1210 15:31:34.193589 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:35 crc kubenswrapper[4727]: I1210 15:31:35.494971 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ffsmq" podUID="2fa99925-e4b5-4bb7-9038-dac984468262" containerName="registry-server" containerID="cri-o://8bf5a38d8ffc72938d8863a54245151b30360efed1bd3a9abc02d777c5e4463b" gracePeriod=2 Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.150036 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.326354 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.328860 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa99925-e4b5-4bb7-9038-dac984468262-catalog-content\") pod \"2fa99925-e4b5-4bb7-9038-dac984468262\" (UID: \"2fa99925-e4b5-4bb7-9038-dac984468262\") " Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.329508 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa99925-e4b5-4bb7-9038-dac984468262-utilities\") pod \"2fa99925-e4b5-4bb7-9038-dac984468262\" (UID: \"2fa99925-e4b5-4bb7-9038-dac984468262\") " Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.329669 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb7pw\" (UniqueName: \"kubernetes.io/projected/2fa99925-e4b5-4bb7-9038-dac984468262-kube-api-access-gb7pw\") pod \"2fa99925-e4b5-4bb7-9038-dac984468262\" (UID: \"2fa99925-e4b5-4bb7-9038-dac984468262\") " Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.330168 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fa99925-e4b5-4bb7-9038-dac984468262-utilities" (OuterVolumeSpecName: "utilities") pod "2fa99925-e4b5-4bb7-9038-dac984468262" (UID: "2fa99925-e4b5-4bb7-9038-dac984468262"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.330870 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa99925-e4b5-4bb7-9038-dac984468262-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.339178 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa99925-e4b5-4bb7-9038-dac984468262-kube-api-access-gb7pw" (OuterVolumeSpecName: "kube-api-access-gb7pw") pod "2fa99925-e4b5-4bb7-9038-dac984468262" (UID: "2fa99925-e4b5-4bb7-9038-dac984468262"). InnerVolumeSpecName "kube-api-access-gb7pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.374850 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgn8w"] Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.375143 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pgn8w" podUID="c7f58891-f414-4aba-9e17-4e9fccbce8ed" containerName="registry-server" containerID="cri-o://5754cdc2ae36a44f31b45f59c346d40e33808a1f9f71a4782c0ba76d2253cc35" gracePeriod=2 Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.433465 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb7pw\" (UniqueName: \"kubernetes.io/projected/2fa99925-e4b5-4bb7-9038-dac984468262-kube-api-access-gb7pw\") on node \"crc\" DevicePath \"\"" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.476718 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fa99925-e4b5-4bb7-9038-dac984468262-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fa99925-e4b5-4bb7-9038-dac984468262" (UID: "2fa99925-e4b5-4bb7-9038-dac984468262"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.507130 4727 generic.go:334] "Generic (PLEG): container finished" podID="c7f58891-f414-4aba-9e17-4e9fccbce8ed" containerID="5754cdc2ae36a44f31b45f59c346d40e33808a1f9f71a4782c0ba76d2253cc35" exitCode=0 Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.507255 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgn8w" event={"ID":"c7f58891-f414-4aba-9e17-4e9fccbce8ed","Type":"ContainerDied","Data":"5754cdc2ae36a44f31b45f59c346d40e33808a1f9f71a4782c0ba76d2253cc35"} Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.509868 4727 generic.go:334] "Generic (PLEG): container finished" podID="2fa99925-e4b5-4bb7-9038-dac984468262" containerID="8bf5a38d8ffc72938d8863a54245151b30360efed1bd3a9abc02d777c5e4463b" exitCode=0 Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.509895 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffsmq" event={"ID":"2fa99925-e4b5-4bb7-9038-dac984468262","Type":"ContainerDied","Data":"8bf5a38d8ffc72938d8863a54245151b30360efed1bd3a9abc02d777c5e4463b"} Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.509932 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffsmq" event={"ID":"2fa99925-e4b5-4bb7-9038-dac984468262","Type":"ContainerDied","Data":"185d6626b036a373eace6ddaaaa6cb190bc1e166f3b0eff68989378400c9075c"} Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.509957 4727 scope.go:117] "RemoveContainer" containerID="8bf5a38d8ffc72938d8863a54245151b30360efed1bd3a9abc02d777c5e4463b" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.510109 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ffsmq" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.535637 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa99925-e4b5-4bb7-9038-dac984468262-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.565639 4727 scope.go:117] "RemoveContainer" containerID="467770d42ca025291238bfae2687afe0eab676352e06fff4b33073d1a0b95912" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.607336 4727 scope.go:117] "RemoveContainer" containerID="48ba5d808dcfd370308e23068686002c0e84a728e824bb577ec8be8d79f8cd1d" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.628962 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ffsmq"] Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.629002 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ffsmq"] Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.683987 4727 scope.go:117] "RemoveContainer" containerID="8bf5a38d8ffc72938d8863a54245151b30360efed1bd3a9abc02d777c5e4463b" Dec 10 15:31:36 crc kubenswrapper[4727]: E1210 15:31:36.684452 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bf5a38d8ffc72938d8863a54245151b30360efed1bd3a9abc02d777c5e4463b\": container with ID starting with 8bf5a38d8ffc72938d8863a54245151b30360efed1bd3a9abc02d777c5e4463b not found: ID does not exist" containerID="8bf5a38d8ffc72938d8863a54245151b30360efed1bd3a9abc02d777c5e4463b" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.684487 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bf5a38d8ffc72938d8863a54245151b30360efed1bd3a9abc02d777c5e4463b"} err="failed to get container status \"8bf5a38d8ffc72938d8863a54245151b30360efed1bd3a9abc02d777c5e4463b\": rpc error: code = NotFound desc = could not find container \"8bf5a38d8ffc72938d8863a54245151b30360efed1bd3a9abc02d777c5e4463b\": container with ID starting with 8bf5a38d8ffc72938d8863a54245151b30360efed1bd3a9abc02d777c5e4463b not found: ID does not exist" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.684510 4727 scope.go:117] "RemoveContainer" containerID="467770d42ca025291238bfae2687afe0eab676352e06fff4b33073d1a0b95912" Dec 10 15:31:36 crc kubenswrapper[4727]: E1210 15:31:36.684852 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"467770d42ca025291238bfae2687afe0eab676352e06fff4b33073d1a0b95912\": container with ID starting with 467770d42ca025291238bfae2687afe0eab676352e06fff4b33073d1a0b95912 not found: ID does not exist" containerID="467770d42ca025291238bfae2687afe0eab676352e06fff4b33073d1a0b95912" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.684874 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"467770d42ca025291238bfae2687afe0eab676352e06fff4b33073d1a0b95912"} err="failed to get container status \"467770d42ca025291238bfae2687afe0eab676352e06fff4b33073d1a0b95912\": rpc error: code = NotFound desc = could not find container \"467770d42ca025291238bfae2687afe0eab676352e06fff4b33073d1a0b95912\": container with ID starting with 467770d42ca025291238bfae2687afe0eab676352e06fff4b33073d1a0b95912 not found: ID does not exist" Dec 10 15:31:36 crc 
kubenswrapper[4727]: I1210 15:31:36.684887 4727 scope.go:117] "RemoveContainer" containerID="48ba5d808dcfd370308e23068686002c0e84a728e824bb577ec8be8d79f8cd1d" Dec 10 15:31:36 crc kubenswrapper[4727]: E1210 15:31:36.685251 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48ba5d808dcfd370308e23068686002c0e84a728e824bb577ec8be8d79f8cd1d\": container with ID starting with 48ba5d808dcfd370308e23068686002c0e84a728e824bb577ec8be8d79f8cd1d not found: ID does not exist" containerID="48ba5d808dcfd370308e23068686002c0e84a728e824bb577ec8be8d79f8cd1d" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.685296 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48ba5d808dcfd370308e23068686002c0e84a728e824bb577ec8be8d79f8cd1d"} err="failed to get container status \"48ba5d808dcfd370308e23068686002c0e84a728e824bb577ec8be8d79f8cd1d\": rpc error: code = NotFound desc = could not find container \"48ba5d808dcfd370308e23068686002c0e84a728e824bb577ec8be8d79f8cd1d\": container with ID starting with 48ba5d808dcfd370308e23068686002c0e84a728e824bb577ec8be8d79f8cd1d not found: ID does not exist" Dec 10 15:31:36 crc kubenswrapper[4727]: I1210 15:31:36.962382 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.072075 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f58891-f414-4aba-9e17-4e9fccbce8ed-utilities\") pod \"c7f58891-f414-4aba-9e17-4e9fccbce8ed\" (UID: \"c7f58891-f414-4aba-9e17-4e9fccbce8ed\") " Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.072337 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4xj5\" (UniqueName: \"kubernetes.io/projected/c7f58891-f414-4aba-9e17-4e9fccbce8ed-kube-api-access-t4xj5\") pod \"c7f58891-f414-4aba-9e17-4e9fccbce8ed\" (UID: \"c7f58891-f414-4aba-9e17-4e9fccbce8ed\") " Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.072415 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f58891-f414-4aba-9e17-4e9fccbce8ed-catalog-content\") pod \"c7f58891-f414-4aba-9e17-4e9fccbce8ed\" (UID: \"c7f58891-f414-4aba-9e17-4e9fccbce8ed\") " Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.075211 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7f58891-f414-4aba-9e17-4e9fccbce8ed-utilities" (OuterVolumeSpecName: "utilities") pod "c7f58891-f414-4aba-9e17-4e9fccbce8ed" (UID: "c7f58891-f414-4aba-9e17-4e9fccbce8ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.081745 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f58891-f414-4aba-9e17-4e9fccbce8ed-kube-api-access-t4xj5" (OuterVolumeSpecName: "kube-api-access-t4xj5") pod "c7f58891-f414-4aba-9e17-4e9fccbce8ed" (UID: "c7f58891-f414-4aba-9e17-4e9fccbce8ed"). InnerVolumeSpecName "kube-api-access-t4xj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.109760 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7f58891-f414-4aba-9e17-4e9fccbce8ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7f58891-f414-4aba-9e17-4e9fccbce8ed" (UID: "c7f58891-f414-4aba-9e17-4e9fccbce8ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.175559 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f58891-f414-4aba-9e17-4e9fccbce8ed-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.175900 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f58891-f414-4aba-9e17-4e9fccbce8ed-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.175917 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4xj5\" (UniqueName: \"kubernetes.io/projected/c7f58891-f414-4aba-9e17-4e9fccbce8ed-kube-api-access-t4xj5\") on node \"crc\" DevicePath \"\"" Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.520900 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgn8w" event={"ID":"c7f58891-f414-4aba-9e17-4e9fccbce8ed","Type":"ContainerDied","Data":"78bc5563deba653e8e181cb94bcfd5cb4a40fbe22bf38e30c02f545a605dfd16"} Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.520973 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgn8w" Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.520978 4727 scope.go:117] "RemoveContainer" containerID="5754cdc2ae36a44f31b45f59c346d40e33808a1f9f71a4782c0ba76d2253cc35" Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.543452 4727 scope.go:117] "RemoveContainer" containerID="37cbfac73a1a3c5c4e540bf8dfe69193b0bc88090346954d104b9bc59f267bd9" Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.566700 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgn8w"] Dec 10 15:31:37 crc kubenswrapper[4727]: E1210 15:31:37.572709 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.572947 4727 scope.go:117] "RemoveContainer" containerID="f3c73b2f03a9306e0dca914606ae524afe0c6ee74d311e42193fd6785a076b76" Dec 10 15:31:37 crc kubenswrapper[4727]: I1210 15:31:37.577419 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgn8w"] Dec 10 15:31:38 crc kubenswrapper[4727]: I1210 15:31:38.575706 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa99925-e4b5-4bb7-9038-dac984468262" path="/var/lib/kubelet/pods/2fa99925-e4b5-4bb7-9038-dac984468262/volumes" Dec 10 15:31:38 crc kubenswrapper[4727]: I1210 15:31:38.576548 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7f58891-f414-4aba-9e17-4e9fccbce8ed" 
path="/var/lib/kubelet/pods/c7f58891-f414-4aba-9e17-4e9fccbce8ed/volumes" Dec 10 15:31:38 crc kubenswrapper[4727]: I1210 15:31:38.763432 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-74w2d"] Dec 10 15:31:38 crc kubenswrapper[4727]: I1210 15:31:38.763709 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-74w2d" podUID="5abb86a9-9d6e-49c4-8359-2207f3aa7490" containerName="registry-server" containerID="cri-o://699d7ce801d27517e0d3e40a3dc73153d538fe33bbc8cbb6aa2b55395fcd065f" gracePeriod=2 Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.389890 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.529381 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5abb86a9-9d6e-49c4-8359-2207f3aa7490-catalog-content\") pod \"5abb86a9-9d6e-49c4-8359-2207f3aa7490\" (UID: \"5abb86a9-9d6e-49c4-8359-2207f3aa7490\") " Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.529458 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5abb86a9-9d6e-49c4-8359-2207f3aa7490-utilities\") pod \"5abb86a9-9d6e-49c4-8359-2207f3aa7490\" (UID: \"5abb86a9-9d6e-49c4-8359-2207f3aa7490\") " Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.529491 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjwcg\" (UniqueName: \"kubernetes.io/projected/5abb86a9-9d6e-49c4-8359-2207f3aa7490-kube-api-access-sjwcg\") pod \"5abb86a9-9d6e-49c4-8359-2207f3aa7490\" (UID: \"5abb86a9-9d6e-49c4-8359-2207f3aa7490\") " Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.531497 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5abb86a9-9d6e-49c4-8359-2207f3aa7490-utilities" (OuterVolumeSpecName: "utilities") pod "5abb86a9-9d6e-49c4-8359-2207f3aa7490" (UID: "5abb86a9-9d6e-49c4-8359-2207f3aa7490"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.537265 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5abb86a9-9d6e-49c4-8359-2207f3aa7490-kube-api-access-sjwcg" (OuterVolumeSpecName: "kube-api-access-sjwcg") pod "5abb86a9-9d6e-49c4-8359-2207f3aa7490" (UID: "5abb86a9-9d6e-49c4-8359-2207f3aa7490"). InnerVolumeSpecName "kube-api-access-sjwcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.569305 4727 generic.go:334] "Generic (PLEG): container finished" podID="5abb86a9-9d6e-49c4-8359-2207f3aa7490" containerID="699d7ce801d27517e0d3e40a3dc73153d538fe33bbc8cbb6aa2b55395fcd065f" exitCode=0 Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.569365 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74w2d" event={"ID":"5abb86a9-9d6e-49c4-8359-2207f3aa7490","Type":"ContainerDied","Data":"699d7ce801d27517e0d3e40a3dc73153d538fe33bbc8cbb6aa2b55395fcd065f"} Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.569399 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74w2d" event={"ID":"5abb86a9-9d6e-49c4-8359-2207f3aa7490","Type":"ContainerDied","Data":"901113d6c0261180351ffa4c67923a76bae5a4fdfaed92bad769aec6a993c9cd"} Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.569421 4727 scope.go:117] "RemoveContainer" containerID="699d7ce801d27517e0d3e40a3dc73153d538fe33bbc8cbb6aa2b55395fcd065f" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.569594 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-74w2d" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.590732 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5abb86a9-9d6e-49c4-8359-2207f3aa7490-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5abb86a9-9d6e-49c4-8359-2207f3aa7490" (UID: "5abb86a9-9d6e-49c4-8359-2207f3aa7490"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.598695 4727 scope.go:117] "RemoveContainer" containerID="57d2517b2c22f44557312ecbe87fcb6d18533a0d0c1d93423905d51897201c7d" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.619643 4727 scope.go:117] "RemoveContainer" containerID="edb88bc76d65c4564ca5dd076d56891afbde082916835494acba18f3c20c5898" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.632991 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5abb86a9-9d6e-49c4-8359-2207f3aa7490-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.633031 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5abb86a9-9d6e-49c4-8359-2207f3aa7490-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.633048 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjwcg\" (UniqueName: \"kubernetes.io/projected/5abb86a9-9d6e-49c4-8359-2207f3aa7490-kube-api-access-sjwcg\") on node \"crc\" DevicePath \"\"" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.675646 4727 scope.go:117] "RemoveContainer" containerID="699d7ce801d27517e0d3e40a3dc73153d538fe33bbc8cbb6aa2b55395fcd065f" Dec 10 15:31:39 crc kubenswrapper[4727]: E1210 15:31:39.676259 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"699d7ce801d27517e0d3e40a3dc73153d538fe33bbc8cbb6aa2b55395fcd065f\": container with ID starting with 699d7ce801d27517e0d3e40a3dc73153d538fe33bbc8cbb6aa2b55395fcd065f not found: ID does not exist" 
containerID="699d7ce801d27517e0d3e40a3dc73153d538fe33bbc8cbb6aa2b55395fcd065f" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.676295 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"699d7ce801d27517e0d3e40a3dc73153d538fe33bbc8cbb6aa2b55395fcd065f"} err="failed to get container status \"699d7ce801d27517e0d3e40a3dc73153d538fe33bbc8cbb6aa2b55395fcd065f\": rpc error: code = NotFound desc = could not find container \"699d7ce801d27517e0d3e40a3dc73153d538fe33bbc8cbb6aa2b55395fcd065f\": container with ID starting with 699d7ce801d27517e0d3e40a3dc73153d538fe33bbc8cbb6aa2b55395fcd065f not found: ID does not exist" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.676316 4727 scope.go:117] "RemoveContainer" containerID="57d2517b2c22f44557312ecbe87fcb6d18533a0d0c1d93423905d51897201c7d" Dec 10 15:31:39 crc kubenswrapper[4727]: E1210 15:31:39.676580 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57d2517b2c22f44557312ecbe87fcb6d18533a0d0c1d93423905d51897201c7d\": container with ID starting with 57d2517b2c22f44557312ecbe87fcb6d18533a0d0c1d93423905d51897201c7d not found: ID does not exist" containerID="57d2517b2c22f44557312ecbe87fcb6d18533a0d0c1d93423905d51897201c7d" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.676600 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57d2517b2c22f44557312ecbe87fcb6d18533a0d0c1d93423905d51897201c7d"} err="failed to get container status \"57d2517b2c22f44557312ecbe87fcb6d18533a0d0c1d93423905d51897201c7d\": rpc error: code = NotFound desc = could not find container \"57d2517b2c22f44557312ecbe87fcb6d18533a0d0c1d93423905d51897201c7d\": container with ID starting with 57d2517b2c22f44557312ecbe87fcb6d18533a0d0c1d93423905d51897201c7d not found: ID does not exist" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.676614 4727 scope.go:117] "RemoveContainer" containerID="edb88bc76d65c4564ca5dd076d56891afbde082916835494acba18f3c20c5898" Dec 10 15:31:39 crc kubenswrapper[4727]: E1210 15:31:39.677103 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edb88bc76d65c4564ca5dd076d56891afbde082916835494acba18f3c20c5898\": container with ID starting with edb88bc76d65c4564ca5dd076d56891afbde082916835494acba18f3c20c5898 not found: ID does not exist" containerID="edb88bc76d65c4564ca5dd076d56891afbde082916835494acba18f3c20c5898" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.677132 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edb88bc76d65c4564ca5dd076d56891afbde082916835494acba18f3c20c5898"} err="failed to get container status \"edb88bc76d65c4564ca5dd076d56891afbde082916835494acba18f3c20c5898\": rpc error: code = NotFound desc = could not find container \"edb88bc76d65c4564ca5dd076d56891afbde082916835494acba18f3c20c5898\": container with ID starting with edb88bc76d65c4564ca5dd076d56891afbde082916835494acba18f3c20c5898 not found: ID does not exist" Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.947708 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-74w2d"] Dec 10 15:31:39 crc kubenswrapper[4727]: I1210 15:31:39.959215 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-74w2d"] Dec 10 15:31:40 crc kubenswrapper[4727]: I1210 15:31:40.580161 
4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5abb86a9-9d6e-49c4-8359-2207f3aa7490" path="/var/lib/kubelet/pods/5abb86a9-9d6e-49c4-8359-2207f3aa7490/volumes" Dec 10 15:31:41 crc kubenswrapper[4727]: E1210 15:31:41.565022 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:31:50 crc kubenswrapper[4727]: E1210 15:31:50.565654 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:31:53 crc kubenswrapper[4727]: E1210 15:31:53.567787 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:32:02 crc kubenswrapper[4727]: E1210 15:32:02.566362 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:32:07 crc kubenswrapper[4727]: I1210 15:32:07.726854 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:32:07 crc kubenswrapper[4727]: I1210 15:32:07.728456 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:32:08 crc kubenswrapper[4727]: E1210 15:32:08.566445 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:32:16 crc kubenswrapper[4727]: E1210 15:32:16.566747 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:32:21 crc kubenswrapper[4727]: E1210 15:32:21.566539 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:32:28 crc kubenswrapper[4727]: E1210 15:32:28.566103 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:32:32 crc kubenswrapper[4727]: E1210 15:32:32.566771 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:32:37 crc kubenswrapper[4727]: I1210 15:32:37.724039 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:32:37 crc kubenswrapper[4727]: I1210 15:32:37.724610 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:32:41 crc kubenswrapper[4727]: E1210 15:32:41.566311 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:32:46 crc kubenswrapper[4727]: E1210 15:32:46.590303 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:32:55 crc kubenswrapper[4727]: E1210 15:32:55.566242 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:32:59 crc kubenswrapper[4727]: E1210 15:32:59.565017 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:33:07 crc kubenswrapper[4727]: I1210 15:33:07.766112 4727 patch_prober.go:28] 
interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:33:07 crc kubenswrapper[4727]: I1210 15:33:07.766743 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:33:07 crc kubenswrapper[4727]: I1210 15:33:07.766802 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 15:33:07 crc kubenswrapper[4727]: I1210 15:33:07.768234 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77727fedf28eae66440b391e378f4f4ff14d64e2fd2001db5a1726dc5ac683da"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:33:07 crc kubenswrapper[4727]: I1210 15:33:07.768340 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://77727fedf28eae66440b391e378f4f4ff14d64e2fd2001db5a1726dc5ac683da" gracePeriod=600 Dec 10 15:33:08 crc kubenswrapper[4727]: I1210 15:33:08.542635 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="77727fedf28eae66440b391e378f4f4ff14d64e2fd2001db5a1726dc5ac683da" exitCode=0 Dec 10 15:33:08 crc kubenswrapper[4727]: I1210 15:33:08.542725 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"77727fedf28eae66440b391e378f4f4ff14d64e2fd2001db5a1726dc5ac683da"} Dec 10 15:33:08 crc kubenswrapper[4727]: I1210 15:33:08.543160 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b"} Dec 10 15:33:08 crc kubenswrapper[4727]: I1210 15:33:08.543191 4727 scope.go:117] "RemoveContainer" containerID="86aa8d5973e29c07631ebd5da6dc4dac4ff31a1d09c544b698e7ec4a227ad8c3" Dec 10 15:33:09 crc kubenswrapper[4727]: E1210 15:33:09.564806 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:33:12 crc kubenswrapper[4727]: E1210 15:33:12.565943 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:33:22 crc kubenswrapper[4727]: E1210 15:33:22.688384 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:33:25 crc kubenswrapper[4727]: I1210 15:33:25.565431 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:33:25 crc kubenswrapper[4727]: E1210 15:33:25.689696 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:33:25 crc kubenswrapper[4727]: E1210 15:33:25.689751 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:33:25 crc kubenswrapper[4727]: E1210 15:33:25.689897 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:33:25 crc kubenswrapper[4727]: E1210 15:33:25.691105 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:33:34 crc kubenswrapper[4727]: E1210 15:33:34.615079 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:33:37 crc kubenswrapper[4727]: E1210 15:33:37.565662 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:33:49 crc kubenswrapper[4727]: E1210 15:33:49.564925 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:33:51 crc kubenswrapper[4727]: E1210 15:33:51.565230 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:34:00 crc kubenswrapper[4727]: E1210 15:34:00.709861 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:34:00 crc kubenswrapper[4727]: E1210 15:34:00.710395 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:34:00 crc kubenswrapper[4727]: E1210 15:34:00.710552 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:34:00 crc kubenswrapper[4727]: E1210 15:34:00.712055 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:34:03 crc kubenswrapper[4727]: E1210 15:34:03.566233 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:34:12 crc kubenswrapper[4727]: E1210 15:34:12.566371 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:34:17 crc kubenswrapper[4727]: E1210 15:34:17.564775 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:34:25 crc kubenswrapper[4727]: E1210 15:34:25.566519 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:34:28 crc kubenswrapper[4727]: E1210 15:34:28.564801 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:34:36 crc kubenswrapper[4727]: E1210 15:34:36.571347 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:34:37 crc kubenswrapper[4727]: I1210 15:34:37.481346 4727 generic.go:334] "Generic (PLEG): container finished" podID="491442b9-eec3-46e4-9b19-998f5fcd72af" containerID="f657eb1d2593dcef52f4d0652cf10c2f9476e9c14450cc778238481459f62439" exitCode=2 Dec 10 15:34:37 crc kubenswrapper[4727]: I1210 15:34:37.481437 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" event={"ID":"491442b9-eec3-46e4-9b19-998f5fcd72af","Type":"ContainerDied","Data":"f657eb1d2593dcef52f4d0652cf10c2f9476e9c14450cc778238481459f62439"} Dec 10 15:34:39 crc kubenswrapper[4727]: I1210 15:34:39.033758 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" Dec 10 15:34:39 crc kubenswrapper[4727]: I1210 15:34:39.217493 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/491442b9-eec3-46e4-9b19-998f5fcd72af-ssh-key\") pod \"491442b9-eec3-46e4-9b19-998f5fcd72af\" (UID: \"491442b9-eec3-46e4-9b19-998f5fcd72af\") " Dec 10 15:34:39 crc kubenswrapper[4727]: I1210 15:34:39.217568 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz5r8\" (UniqueName: \"kubernetes.io/projected/491442b9-eec3-46e4-9b19-998f5fcd72af-kube-api-access-xz5r8\") pod \"491442b9-eec3-46e4-9b19-998f5fcd72af\" (UID: \"491442b9-eec3-46e4-9b19-998f5fcd72af\") " Dec 10 15:34:39 crc kubenswrapper[4727]: I1210 15:34:39.217762 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/491442b9-eec3-46e4-9b19-998f5fcd72af-inventory\") pod \"491442b9-eec3-46e4-9b19-998f5fcd72af\" (UID: \"491442b9-eec3-46e4-9b19-998f5fcd72af\") " Dec 10 15:34:39 crc kubenswrapper[4727]: I1210 15:34:39.224310 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/491442b9-eec3-46e4-9b19-998f5fcd72af-kube-api-access-xz5r8" (OuterVolumeSpecName: "kube-api-access-xz5r8") pod "491442b9-eec3-46e4-9b19-998f5fcd72af" (UID: "491442b9-eec3-46e4-9b19-998f5fcd72af"). InnerVolumeSpecName "kube-api-access-xz5r8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:34:39 crc kubenswrapper[4727]: I1210 15:34:39.247311 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/491442b9-eec3-46e4-9b19-998f5fcd72af-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "491442b9-eec3-46e4-9b19-998f5fcd72af" (UID: "491442b9-eec3-46e4-9b19-998f5fcd72af"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:34:39 crc kubenswrapper[4727]: I1210 15:34:39.256367 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/491442b9-eec3-46e4-9b19-998f5fcd72af-inventory" (OuterVolumeSpecName: "inventory") pod "491442b9-eec3-46e4-9b19-998f5fcd72af" (UID: "491442b9-eec3-46e4-9b19-998f5fcd72af"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:34:39 crc kubenswrapper[4727]: I1210 15:34:39.321170 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/491442b9-eec3-46e4-9b19-998f5fcd72af-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:34:39 crc kubenswrapper[4727]: I1210 15:34:39.321220 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/491442b9-eec3-46e4-9b19-998f5fcd72af-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:34:39 crc kubenswrapper[4727]: I1210 15:34:39.321234 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz5r8\" (UniqueName: \"kubernetes.io/projected/491442b9-eec3-46e4-9b19-998f5fcd72af-kube-api-access-xz5r8\") on node \"crc\" DevicePath \"\"" Dec 10 15:34:39 crc kubenswrapper[4727]: I1210 15:34:39.507303 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" event={"ID":"491442b9-eec3-46e4-9b19-998f5fcd72af","Type":"ContainerDied","Data":"af28ac4e4f6443a3c642e1ae0904ff7a026253aa97d28a5c800de894ae7de168"} Dec 10 15:34:39 crc kubenswrapper[4727]: I1210 15:34:39.507945 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af28ac4e4f6443a3c642e1ae0904ff7a026253aa97d28a5c800de894ae7de168" Dec 10 15:34:39 crc kubenswrapper[4727]: I1210 15:34:39.507487 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9" Dec 10 15:34:42 crc kubenswrapper[4727]: E1210 15:34:42.567369 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.334934 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qt9hp"] Dec 10 15:34:46 crc kubenswrapper[4727]: E1210 15:34:46.336554 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="491442b9-eec3-46e4-9b19-998f5fcd72af" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.336571 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="491442b9-eec3-46e4-9b19-998f5fcd72af" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:34:46 crc kubenswrapper[4727]: E1210 15:34:46.336587 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa99925-e4b5-4bb7-9038-dac984468262" containerName="extract-content" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.336594 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa99925-e4b5-4bb7-9038-dac984468262" containerName="extract-content" Dec 10 15:34:46 crc kubenswrapper[4727]: E1210 15:34:46.336609 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5abb86a9-9d6e-49c4-8359-2207f3aa7490" containerName="extract-content" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.336618 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5abb86a9-9d6e-49c4-8359-2207f3aa7490" containerName="extract-content" Dec 10 15:34:46 crc kubenswrapper[4727]: E1210 15:34:46.336635 4727 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2fa99925-e4b5-4bb7-9038-dac984468262" containerName="extract-utilities" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.336642 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa99925-e4b5-4bb7-9038-dac984468262" containerName="extract-utilities" Dec 10 15:34:46 crc kubenswrapper[4727]: E1210 15:34:46.336656 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f58891-f414-4aba-9e17-4e9fccbce8ed" containerName="extract-utilities" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.336661 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f58891-f414-4aba-9e17-4e9fccbce8ed" containerName="extract-utilities" Dec 10 15:34:46 crc kubenswrapper[4727]: E1210 15:34:46.336673 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5abb86a9-9d6e-49c4-8359-2207f3aa7490" containerName="extract-utilities" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.336679 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5abb86a9-9d6e-49c4-8359-2207f3aa7490" containerName="extract-utilities" Dec 10 15:34:46 crc kubenswrapper[4727]: E1210 15:34:46.336695 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f58891-f414-4aba-9e17-4e9fccbce8ed" containerName="extract-content" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.336701 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f58891-f414-4aba-9e17-4e9fccbce8ed" containerName="extract-content" Dec 10 15:34:46 crc kubenswrapper[4727]: E1210 15:34:46.336709 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa99925-e4b5-4bb7-9038-dac984468262" containerName="registry-server" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.336715 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa99925-e4b5-4bb7-9038-dac984468262" containerName="registry-server" Dec 10 15:34:46 crc kubenswrapper[4727]: E1210 15:34:46.336728 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f58891-f414-4aba-9e17-4e9fccbce8ed" containerName="registry-server" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.336733 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f58891-f414-4aba-9e17-4e9fccbce8ed" containerName="registry-server" Dec 10 15:34:46 crc kubenswrapper[4727]: E1210 15:34:46.336743 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5abb86a9-9d6e-49c4-8359-2207f3aa7490" containerName="registry-server" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.336748 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5abb86a9-9d6e-49c4-8359-2207f3aa7490" containerName="registry-server" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.336970 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="491442b9-eec3-46e4-9b19-998f5fcd72af" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.336982 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f58891-f414-4aba-9e17-4e9fccbce8ed" containerName="registry-server" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.336992 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa99925-e4b5-4bb7-9038-dac984468262" containerName="registry-server" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.337011 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5abb86a9-9d6e-49c4-8359-2207f3aa7490" containerName="registry-server" Dec 
10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.340726 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.384295 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qt9hp"] Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.479701 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4180db8-76a6-432c-82d4-bceef2e28813-utilities\") pod \"certified-operators-qt9hp\" (UID: \"f4180db8-76a6-432c-82d4-bceef2e28813\") " pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.479833 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4180db8-76a6-432c-82d4-bceef2e28813-catalog-content\") pod \"certified-operators-qt9hp\" (UID: \"f4180db8-76a6-432c-82d4-bceef2e28813\") " pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.480198 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz6xj\" (UniqueName: \"kubernetes.io/projected/f4180db8-76a6-432c-82d4-bceef2e28813-kube-api-access-tz6xj\") pod \"certified-operators-qt9hp\" (UID: \"f4180db8-76a6-432c-82d4-bceef2e28813\") " pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.582019 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4180db8-76a6-432c-82d4-bceef2e28813-utilities\") pod \"certified-operators-qt9hp\" (UID: \"f4180db8-76a6-432c-82d4-bceef2e28813\") " pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.582087 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4180db8-76a6-432c-82d4-bceef2e28813-catalog-content\") pod \"certified-operators-qt9hp\" (UID: \"f4180db8-76a6-432c-82d4-bceef2e28813\") " pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.582176 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz6xj\" (UniqueName: \"kubernetes.io/projected/f4180db8-76a6-432c-82d4-bceef2e28813-kube-api-access-tz6xj\") pod \"certified-operators-qt9hp\" (UID: \"f4180db8-76a6-432c-82d4-bceef2e28813\") " pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.582890 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4180db8-76a6-432c-82d4-bceef2e28813-utilities\") pod \"certified-operators-qt9hp\" (UID: \"f4180db8-76a6-432c-82d4-bceef2e28813\") " pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.582935 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4180db8-76a6-432c-82d4-bceef2e28813-catalog-content\") pod \"certified-operators-qt9hp\" (UID: \"f4180db8-76a6-432c-82d4-bceef2e28813\") " 
pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.606260 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz6xj\" (UniqueName: \"kubernetes.io/projected/f4180db8-76a6-432c-82d4-bceef2e28813-kube-api-access-tz6xj\") pod \"certified-operators-qt9hp\" (UID: \"f4180db8-76a6-432c-82d4-bceef2e28813\") " pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:34:46 crc kubenswrapper[4727]: I1210 15:34:46.714079 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:34:47 crc kubenswrapper[4727]: I1210 15:34:47.268755 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qt9hp"] Dec 10 15:34:47 crc kubenswrapper[4727]: E1210 15:34:47.565803 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:34:47 crc kubenswrapper[4727]: I1210 15:34:47.581636 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4180db8-76a6-432c-82d4-bceef2e28813" containerID="6d42cc8f9b0ae67caf42768f85bf07d4899771e8623b03b7e3a494398b260ce8" exitCode=0 Dec 10 15:34:47 crc kubenswrapper[4727]: I1210 15:34:47.581703 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qt9hp" event={"ID":"f4180db8-76a6-432c-82d4-bceef2e28813","Type":"ContainerDied","Data":"6d42cc8f9b0ae67caf42768f85bf07d4899771e8623b03b7e3a494398b260ce8"} Dec 10 15:34:47 crc kubenswrapper[4727]: I1210 15:34:47.581746 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qt9hp" event={"ID":"f4180db8-76a6-432c-82d4-bceef2e28813","Type":"ContainerStarted","Data":"9cba1cef74dfd47d7ca29fa6c8d75d77781f3ff47d00d6fd2cca925048c30adf"} Dec 10 15:34:48 crc kubenswrapper[4727]: I1210 15:34:48.603230 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qt9hp" event={"ID":"f4180db8-76a6-432c-82d4-bceef2e28813","Type":"ContainerStarted","Data":"8278f67f7697bd200199852e9d0487464a02517810d25d4037c389fb2e6003ff"} Dec 10 15:34:49 crc kubenswrapper[4727]: I1210 15:34:49.615480 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4180db8-76a6-432c-82d4-bceef2e28813" containerID="8278f67f7697bd200199852e9d0487464a02517810d25d4037c389fb2e6003ff" exitCode=0 Dec 10 15:34:49 crc kubenswrapper[4727]: I1210 15:34:49.615593 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qt9hp" event={"ID":"f4180db8-76a6-432c-82d4-bceef2e28813","Type":"ContainerDied","Data":"8278f67f7697bd200199852e9d0487464a02517810d25d4037c389fb2e6003ff"} Dec 10 15:34:50 crc kubenswrapper[4727]: I1210 15:34:50.630573 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qt9hp" event={"ID":"f4180db8-76a6-432c-82d4-bceef2e28813","Type":"ContainerStarted","Data":"9c38088fad3d34835c2e59383a3de70abd18d99e3aa7bf08398290b8e1600acc"} Dec 10 15:34:50 crc kubenswrapper[4727]: I1210 15:34:50.654873 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qt9hp" 
podStartSLOduration=2.222577207 podStartE2EDuration="4.654849859s" podCreationTimestamp="2025-12-10 15:34:46 +0000 UTC" firstStartedPulling="2025-12-10 15:34:47.60603863 +0000 UTC m=+3791.800813172" lastFinishedPulling="2025-12-10 15:34:50.038311282 +0000 UTC m=+3794.233085824" observedRunningTime="2025-12-10 15:34:50.647134196 +0000 UTC m=+3794.841908748" watchObservedRunningTime="2025-12-10 15:34:50.654849859 +0000 UTC m=+3794.849624401" Dec 10 15:34:56 crc kubenswrapper[4727]: I1210 15:34:56.714948 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:34:56 crc kubenswrapper[4727]: I1210 15:34:56.715735 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:34:56 crc kubenswrapper[4727]: I1210 15:34:56.769606 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:34:57 crc kubenswrapper[4727]: E1210 15:34:57.567339 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:34:57 crc kubenswrapper[4727]: I1210 15:34:57.750318 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:34:57 crc kubenswrapper[4727]: I1210 15:34:57.805172 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qt9hp"] Dec 10 15:34:59 crc kubenswrapper[4727]: I1210 15:34:59.716602 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qt9hp" podUID="f4180db8-76a6-432c-82d4-bceef2e28813" containerName="registry-server" containerID="cri-o://9c38088fad3d34835c2e59383a3de70abd18d99e3aa7bf08398290b8e1600acc" gracePeriod=2 Dec 10 15:35:00 crc kubenswrapper[4727]: I1210 15:35:00.730721 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4180db8-76a6-432c-82d4-bceef2e28813" containerID="9c38088fad3d34835c2e59383a3de70abd18d99e3aa7bf08398290b8e1600acc" exitCode=0 Dec 10 15:35:00 crc kubenswrapper[4727]: I1210 15:35:00.730806 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qt9hp" event={"ID":"f4180db8-76a6-432c-82d4-bceef2e28813","Type":"ContainerDied","Data":"9c38088fad3d34835c2e59383a3de70abd18d99e3aa7bf08398290b8e1600acc"} Dec 10 15:35:00 crc kubenswrapper[4727]: I1210 15:35:00.731350 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qt9hp" event={"ID":"f4180db8-76a6-432c-82d4-bceef2e28813","Type":"ContainerDied","Data":"9cba1cef74dfd47d7ca29fa6c8d75d77781f3ff47d00d6fd2cca925048c30adf"} Dec 10 15:35:00 crc kubenswrapper[4727]: I1210 15:35:00.731379 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cba1cef74dfd47d7ca29fa6c8d75d77781f3ff47d00d6fd2cca925048c30adf" Dec 10 15:35:00 crc kubenswrapper[4727]: I1210 15:35:00.750888 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:35:00 crc kubenswrapper[4727]: I1210 15:35:00.822605 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4180db8-76a6-432c-82d4-bceef2e28813-utilities\") pod \"f4180db8-76a6-432c-82d4-bceef2e28813\" (UID: \"f4180db8-76a6-432c-82d4-bceef2e28813\") " Dec 10 15:35:00 crc kubenswrapper[4727]: I1210 15:35:00.822797 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz6xj\" (UniqueName: \"kubernetes.io/projected/f4180db8-76a6-432c-82d4-bceef2e28813-kube-api-access-tz6xj\") pod \"f4180db8-76a6-432c-82d4-bceef2e28813\" (UID: \"f4180db8-76a6-432c-82d4-bceef2e28813\") " Dec 10 15:35:00 crc kubenswrapper[4727]: I1210 15:35:00.822833 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4180db8-76a6-432c-82d4-bceef2e28813-catalog-content\") pod \"f4180db8-76a6-432c-82d4-bceef2e28813\" (UID: \"f4180db8-76a6-432c-82d4-bceef2e28813\") " Dec 10 15:35:00 crc kubenswrapper[4727]: I1210 15:35:00.824590 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4180db8-76a6-432c-82d4-bceef2e28813-utilities" (OuterVolumeSpecName: "utilities") pod "f4180db8-76a6-432c-82d4-bceef2e28813" (UID: "f4180db8-76a6-432c-82d4-bceef2e28813"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:35:00 crc kubenswrapper[4727]: I1210 15:35:00.825685 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4180db8-76a6-432c-82d4-bceef2e28813-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:35:00 crc kubenswrapper[4727]: I1210 15:35:00.833260 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4180db8-76a6-432c-82d4-bceef2e28813-kube-api-access-tz6xj" (OuterVolumeSpecName: "kube-api-access-tz6xj") pod "f4180db8-76a6-432c-82d4-bceef2e28813" (UID: "f4180db8-76a6-432c-82d4-bceef2e28813"). InnerVolumeSpecName "kube-api-access-tz6xj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:35:00 crc kubenswrapper[4727]: I1210 15:35:00.896357 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4180db8-76a6-432c-82d4-bceef2e28813-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4180db8-76a6-432c-82d4-bceef2e28813" (UID: "f4180db8-76a6-432c-82d4-bceef2e28813"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:35:00 crc kubenswrapper[4727]: I1210 15:35:00.928932 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz6xj\" (UniqueName: \"kubernetes.io/projected/f4180db8-76a6-432c-82d4-bceef2e28813-kube-api-access-tz6xj\") on node \"crc\" DevicePath \"\"" Dec 10 15:35:00 crc kubenswrapper[4727]: I1210 15:35:00.928980 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4180db8-76a6-432c-82d4-bceef2e28813-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:35:01 crc kubenswrapper[4727]: E1210 15:35:01.566227 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:35:01 crc kubenswrapper[4727]: I1210 15:35:01.740353 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qt9hp" Dec 10 15:35:01 crc kubenswrapper[4727]: I1210 15:35:01.782114 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qt9hp"] Dec 10 15:35:01 crc kubenswrapper[4727]: I1210 15:35:01.794578 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qt9hp"] Dec 10 15:35:02 crc kubenswrapper[4727]: I1210 15:35:02.576071 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4180db8-76a6-432c-82d4-bceef2e28813" path="/var/lib/kubelet/pods/f4180db8-76a6-432c-82d4-bceef2e28813/volumes" Dec 10 15:35:09 crc kubenswrapper[4727]: E1210 15:35:09.565788 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:35:14 crc kubenswrapper[4727]: E1210 15:35:14.565202 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:35:21 crc kubenswrapper[4727]: E1210 15:35:21.565650 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:35:26 crc kubenswrapper[4727]: E1210 15:35:26.572471 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:35:34 crc kubenswrapper[4727]: E1210 15:35:34.565937 4727 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:35:37 crc kubenswrapper[4727]: I1210 15:35:37.723732 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:35:37 crc kubenswrapper[4727]: I1210 15:35:37.724287 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:35:41 crc kubenswrapper[4727]: E1210 15:35:41.565078 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:35:48 crc kubenswrapper[4727]: E1210 15:35:48.576202 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:35:54 crc kubenswrapper[4727]: E1210 15:35:54.565828 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.037677 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs"] Dec 10 15:35:57 crc kubenswrapper[4727]: E1210 15:35:57.038584 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4180db8-76a6-432c-82d4-bceef2e28813" containerName="registry-server" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.038604 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4180db8-76a6-432c-82d4-bceef2e28813" containerName="registry-server" Dec 10 15:35:57 crc kubenswrapper[4727]: E1210 15:35:57.038621 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4180db8-76a6-432c-82d4-bceef2e28813" containerName="extract-utilities" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.038627 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4180db8-76a6-432c-82d4-bceef2e28813" containerName="extract-utilities" Dec 10 15:35:57 crc kubenswrapper[4727]: E1210 15:35:57.038654 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4180db8-76a6-432c-82d4-bceef2e28813" containerName="extract-content" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.038663 4727 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4180db8-76a6-432c-82d4-bceef2e28813" containerName="extract-content" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.038940 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4180db8-76a6-432c-82d4-bceef2e28813" containerName="registry-server" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.039926 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.047670 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.047946 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.048182 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.048917 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j82js" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.049513 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs"] Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.103560 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/791c3204-0b3f-4004-8871-8af969076bc2-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b46zs\" (UID: \"791c3204-0b3f-4004-8871-8af969076bc2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.104627 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/791c3204-0b3f-4004-8871-8af969076bc2-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b46zs\" (UID: \"791c3204-0b3f-4004-8871-8af969076bc2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.104863 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js926\" (UniqueName: \"kubernetes.io/projected/791c3204-0b3f-4004-8871-8af969076bc2-kube-api-access-js926\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b46zs\" (UID: \"791c3204-0b3f-4004-8871-8af969076bc2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.208122 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/791c3204-0b3f-4004-8871-8af969076bc2-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b46zs\" (UID: \"791c3204-0b3f-4004-8871-8af969076bc2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.208220 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/791c3204-0b3f-4004-8871-8af969076bc2-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-b46zs\" (UID: \"791c3204-0b3f-4004-8871-8af969076bc2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.208348 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js926\" (UniqueName: \"kubernetes.io/projected/791c3204-0b3f-4004-8871-8af969076bc2-kube-api-access-js926\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b46zs\" (UID: \"791c3204-0b3f-4004-8871-8af969076bc2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.214497 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/791c3204-0b3f-4004-8871-8af969076bc2-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b46zs\" (UID: \"791c3204-0b3f-4004-8871-8af969076bc2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.215646 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/791c3204-0b3f-4004-8871-8af969076bc2-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b46zs\" (UID: \"791c3204-0b3f-4004-8871-8af969076bc2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.227757 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js926\" (UniqueName: \"kubernetes.io/projected/791c3204-0b3f-4004-8871-8af969076bc2-kube-api-access-js926\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b46zs\" (UID: \"791c3204-0b3f-4004-8871-8af969076bc2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.373770 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" Dec 10 15:35:57 crc kubenswrapper[4727]: I1210 15:35:57.964410 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs"] Dec 10 15:35:58 crc kubenswrapper[4727]: I1210 15:35:58.387799 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" event={"ID":"791c3204-0b3f-4004-8871-8af969076bc2","Type":"ContainerStarted","Data":"853bd594c7a72aa45ebad4e3d575418445cdad379551cf55c26feaad9e8673f0"} Dec 10 15:35:59 crc kubenswrapper[4727]: I1210 15:35:59.402465 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" event={"ID":"791c3204-0b3f-4004-8871-8af969076bc2","Type":"ContainerStarted","Data":"e919b2147f1a9caeb93cf42d7b8b8d956c785385a2af621adf82cd514483f3e9"} Dec 10 15:35:59 crc kubenswrapper[4727]: I1210 15:35:59.435318 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" podStartSLOduration=1.7971363569999999 podStartE2EDuration="2.435278262s" podCreationTimestamp="2025-12-10 15:35:57 +0000 UTC" firstStartedPulling="2025-12-10 15:35:57.968877782 +0000 UTC m=+3862.163652324" lastFinishedPulling="2025-12-10 15:35:58.607019687 +0000 UTC m=+3862.801794229" observedRunningTime="2025-12-10 15:35:59.426497851 +0000 UTC m=+3863.621272393" watchObservedRunningTime="2025-12-10 15:35:59.435278262 +0000 UTC m=+3863.630052804" Dec 10 15:36:03 crc kubenswrapper[4727]: E1210 15:36:03.566028 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:36:07 crc kubenswrapper[4727]: I1210 15:36:07.725395 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:36:07 crc kubenswrapper[4727]: I1210 15:36:07.725986 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:36:08 crc kubenswrapper[4727]: E1210 15:36:08.567412 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:36:18 crc kubenswrapper[4727]: E1210 15:36:18.566257 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:36:23 crc kubenswrapper[4727]: E1210 15:36:23.567272 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:36:33 crc kubenswrapper[4727]: E1210 15:36:33.566119 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:36:36 crc kubenswrapper[4727]: E1210 15:36:36.587298 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:36:37 crc kubenswrapper[4727]: I1210 15:36:37.724020 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:36:37 crc kubenswrapper[4727]: I1210 15:36:37.724379 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:36:37 crc kubenswrapper[4727]: I1210 15:36:37.724442 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 15:36:37 crc kubenswrapper[4727]: I1210 15:36:37.725625 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:36:37 crc kubenswrapper[4727]: I1210 15:36:37.725698 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" gracePeriod=600 Dec 10 15:36:37 crc kubenswrapper[4727]: E1210 15:36:37.862516 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" 
podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:36:37 crc kubenswrapper[4727]: I1210 15:36:37.970176 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" exitCode=0 Dec 10 15:36:37 crc kubenswrapper[4727]: I1210 15:36:37.970257 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b"} Dec 10 15:36:37 crc kubenswrapper[4727]: I1210 15:36:37.970496 4727 scope.go:117] "RemoveContainer" containerID="77727fedf28eae66440b391e378f4f4ff14d64e2fd2001db5a1726dc5ac683da" Dec 10 15:36:37 crc kubenswrapper[4727]: I1210 15:36:37.971356 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:36:37 crc kubenswrapper[4727]: E1210 15:36:37.971674 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:36:45 crc kubenswrapper[4727]: E1210 15:36:45.567975 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:36:48 crc kubenswrapper[4727]: E1210 15:36:48.566792 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:36:49 crc kubenswrapper[4727]: I1210 15:36:49.563142 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:36:49 crc kubenswrapper[4727]: E1210 15:36:49.563787 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:37:00 crc kubenswrapper[4727]: E1210 15:37:00.567740 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:37:01 crc kubenswrapper[4727]: E1210 15:37:01.565610 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:37:02 crc kubenswrapper[4727]: I1210 15:37:02.567837 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:37:02 crc kubenswrapper[4727]: E1210 15:37:02.568519 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:37:11 crc kubenswrapper[4727]: E1210 15:37:11.568228 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:37:12 crc kubenswrapper[4727]: E1210 15:37:12.564836 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:37:17 crc kubenswrapper[4727]: I1210 15:37:17.563803 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:37:17 crc kubenswrapper[4727]: E1210 15:37:17.564444 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:37:23 crc kubenswrapper[4727]: E1210 15:37:23.566350 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:37:26 crc kubenswrapper[4727]: E1210 15:37:26.575192 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:37:32 crc kubenswrapper[4727]: I1210 15:37:32.564374 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:37:32 crc kubenswrapper[4727]: E1210 15:37:32.565212 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:37:38 crc kubenswrapper[4727]: E1210 15:37:38.565811 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:37:40 crc kubenswrapper[4727]: E1210 15:37:40.565895 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:37:45 crc kubenswrapper[4727]: I1210 15:37:45.563337 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:37:45 crc kubenswrapper[4727]: E1210 15:37:45.564011 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:37:51 crc kubenswrapper[4727]: E1210 15:37:51.566025 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:37:53 crc kubenswrapper[4727]: E1210 15:37:53.564501 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:37:58 crc kubenswrapper[4727]: I1210 15:37:58.563650 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:37:58 crc kubenswrapper[4727]: E1210 15:37:58.564588 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:38:05 crc kubenswrapper[4727]: E1210 15:38:05.567275 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:38:05 crc kubenswrapper[4727]: E1210 15:38:05.567269 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:38:10 crc kubenswrapper[4727]: I1210 15:38:10.569030 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:38:10 crc kubenswrapper[4727]: E1210 15:38:10.569883 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:38:17 crc kubenswrapper[4727]: E1210 15:38:17.565611 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:38:18 crc kubenswrapper[4727]: E1210 15:38:18.564454 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:38:21 crc kubenswrapper[4727]: I1210 15:38:21.563073 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:38:21 crc kubenswrapper[4727]: E1210 15:38:21.563797 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:38:29 crc kubenswrapper[4727]: I1210 15:38:29.565717 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:38:29 crc kubenswrapper[4727]: E1210 15:38:29.690617 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:38:29 crc kubenswrapper[4727]: E1210 15:38:29.690676 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:38:29 crc kubenswrapper[4727]: E1210 15:38:29.690960 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 10 15:38:29 crc kubenswrapper[4727]: E1210 15:38:29.692155 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:38:30 crc kubenswrapper[4727]: E1210 15:38:30.565224 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:38:35 crc kubenswrapper[4727]: I1210 15:38:35.563869 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:38:35 crc kubenswrapper[4727]: E1210 15:38:35.564629 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:38:43 crc kubenswrapper[4727]: E1210 15:38:43.566537 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:38:44 crc kubenswrapper[4727]: E1210 15:38:44.564700 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:38:46 crc kubenswrapper[4727]: I1210 15:38:46.573033 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:38:46 crc kubenswrapper[4727]: E1210 15:38:46.574395 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:38:54 crc kubenswrapper[4727]: E1210 15:38:54.566938 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:38:56 crc kubenswrapper[4727]: E1210 15:38:56.575365 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:38:59 crc kubenswrapper[4727]: I1210 15:38:59.563928 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:38:59 crc kubenswrapper[4727]: E1210 15:38:59.565556 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:39:06 crc kubenswrapper[4727]: E1210 15:39:06.573786 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:39:10 crc kubenswrapper[4727]: E1210 15:39:10.654242 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:39:10 crc kubenswrapper[4727]: E1210 15:39:10.654786 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:39:10 crc kubenswrapper[4727]: E1210 15:39:10.655025 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:39:10 crc kubenswrapper[4727]: E1210 15:39:10.657123 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:39:12 crc kubenswrapper[4727]: I1210 15:39:12.563843 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:39:12 crc kubenswrapper[4727]: E1210 15:39:12.564534 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:39:21 crc kubenswrapper[4727]: E1210 15:39:21.566366 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:39:22 crc kubenswrapper[4727]: E1210 15:39:22.565798 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:39:26 crc kubenswrapper[4727]: I1210 15:39:26.573637 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:39:26 crc kubenswrapper[4727]: E1210 15:39:26.574450 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:39:33 crc kubenswrapper[4727]: E1210 15:39:33.565233 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:39:37 crc kubenswrapper[4727]: E1210 15:39:37.565412 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:39:38 crc kubenswrapper[4727]: I1210 15:39:38.563718 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:39:38 crc kubenswrapper[4727]: E1210 15:39:38.564603 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:39:45 crc kubenswrapper[4727]: E1210 15:39:45.565551 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:39:52 crc kubenswrapper[4727]: I1210 15:39:52.563455 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:39:52 crc kubenswrapper[4727]: E1210 15:39:52.564342 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:39:52 crc kubenswrapper[4727]: E1210 15:39:52.566107 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:39:56 crc kubenswrapper[4727]: E1210 15:39:56.573637 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:40:03 crc kubenswrapper[4727]: I1210 15:40:03.564035 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:40:03 crc kubenswrapper[4727]: E1210 15:40:03.564891 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:40:03 crc kubenswrapper[4727]: E1210 15:40:03.568090 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:40:09 crc kubenswrapper[4727]: E1210 15:40:09.566053 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:40:16 crc kubenswrapper[4727]: E1210 15:40:16.572988 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:40:17 crc kubenswrapper[4727]: I1210 15:40:17.564210 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:40:17 crc kubenswrapper[4727]: E1210 15:40:17.564923 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:40:23 crc kubenswrapper[4727]: E1210 15:40:23.565593 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:40:28 crc kubenswrapper[4727]: I1210 15:40:28.564315 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:40:28 crc kubenswrapper[4727]: E1210 15:40:28.564976 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:40:28 crc kubenswrapper[4727]: E1210 15:40:28.565458 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:40:35 crc kubenswrapper[4727]: E1210 15:40:35.565065 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:40:40 crc kubenswrapper[4727]: I1210 15:40:40.571795 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:40:40 crc kubenswrapper[4727]: E1210 15:40:40.572608 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Dec 10 15:40:42 crc kubenswrapper[4727]: E1210 15:40:42.608269 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:40:49 crc kubenswrapper[4727]: E1210 15:40:49.565486 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:40:51 crc kubenswrapper[4727]: I1210 15:40:51.562475 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b"
Dec 10 15:40:51 crc kubenswrapper[4727]: E1210 15:40:51.563000 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 15:40:57 crc kubenswrapper[4727]: E1210 15:40:57.565856 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:41:01 crc kubenswrapper[4727]: E1210 15:41:01.565292 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:41:02 crc kubenswrapper[4727]: I1210 15:41:02.563693 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b"
Dec 10 15:41:02 crc kubenswrapper[4727]: E1210 15:41:02.564012 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 15:41:08 crc kubenswrapper[4727]: E1210 15:41:08.565632 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:41:14 crc kubenswrapper[4727]: I1210 15:41:14.297188 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qv85f"]
Dec 10 15:41:14 crc kubenswrapper[4727]: I1210 15:41:14.342508 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qv85f"
Dec 10 15:41:14 crc kubenswrapper[4727]: I1210 15:41:14.348414 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qv85f"]
Dec 10 15:41:14 crc kubenswrapper[4727]: I1210 15:41:14.483813 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc310730-512f-4993-b5de-a16b1f5d51e1-utilities\") pod \"redhat-marketplace-qv85f\" (UID: \"cc310730-512f-4993-b5de-a16b1f5d51e1\") " pod="openshift-marketplace/redhat-marketplace-qv85f"
Dec 10 15:41:14 crc kubenswrapper[4727]: I1210 15:41:14.484211 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4wjc\" (UniqueName: \"kubernetes.io/projected/cc310730-512f-4993-b5de-a16b1f5d51e1-kube-api-access-m4wjc\") pod \"redhat-marketplace-qv85f\" (UID: \"cc310730-512f-4993-b5de-a16b1f5d51e1\") " pod="openshift-marketplace/redhat-marketplace-qv85f"
Dec 10 15:41:14 crc kubenswrapper[4727]: I1210 15:41:14.484623 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc310730-512f-4993-b5de-a16b1f5d51e1-catalog-content\") pod \"redhat-marketplace-qv85f\" (UID: \"cc310730-512f-4993-b5de-a16b1f5d51e1\") " pod="openshift-marketplace/redhat-marketplace-qv85f"
Dec 10 15:41:14 crc kubenswrapper[4727]: I1210 15:41:14.587038 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc310730-512f-4993-b5de-a16b1f5d51e1-utilities\") pod \"redhat-marketplace-qv85f\" (UID: \"cc310730-512f-4993-b5de-a16b1f5d51e1\") " pod="openshift-marketplace/redhat-marketplace-qv85f"
Dec 10 15:41:14 crc kubenswrapper[4727]: I1210 15:41:14.587182 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4wjc\" (UniqueName: \"kubernetes.io/projected/cc310730-512f-4993-b5de-a16b1f5d51e1-kube-api-access-m4wjc\") pod \"redhat-marketplace-qv85f\" (UID: \"cc310730-512f-4993-b5de-a16b1f5d51e1\") " pod="openshift-marketplace/redhat-marketplace-qv85f"
Dec 10 15:41:14 crc kubenswrapper[4727]: I1210 15:41:14.587239 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc310730-512f-4993-b5de-a16b1f5d51e1-catalog-content\") pod \"redhat-marketplace-qv85f\" (UID: \"cc310730-512f-4993-b5de-a16b1f5d51e1\") " pod="openshift-marketplace/redhat-marketplace-qv85f"
Dec 10 15:41:14 crc kubenswrapper[4727]: I1210 15:41:14.587678 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc310730-512f-4993-b5de-a16b1f5d51e1-utilities\") pod \"redhat-marketplace-qv85f\" (UID: \"cc310730-512f-4993-b5de-a16b1f5d51e1\") " pod="openshift-marketplace/redhat-marketplace-qv85f"
Dec 10 15:41:14 crc kubenswrapper[4727]: I1210 15:41:14.587791 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc310730-512f-4993-b5de-a16b1f5d51e1-catalog-content\") pod \"redhat-marketplace-qv85f\" (UID: \"cc310730-512f-4993-b5de-a16b1f5d51e1\") " pod="openshift-marketplace/redhat-marketplace-qv85f"
Dec 10 15:41:14 crc kubenswrapper[4727]: I1210 15:41:14.606180 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4wjc\" (UniqueName: \"kubernetes.io/projected/cc310730-512f-4993-b5de-a16b1f5d51e1-kube-api-access-m4wjc\") pod \"redhat-marketplace-qv85f\" (UID: \"cc310730-512f-4993-b5de-a16b1f5d51e1\") " pod="openshift-marketplace/redhat-marketplace-qv85f"
Dec 10 15:41:14 crc kubenswrapper[4727]: I1210 15:41:14.681403 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qv85f"
Dec 10 15:41:15 crc kubenswrapper[4727]: I1210 15:41:15.224229 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qv85f"]
Dec 10 15:41:15 crc kubenswrapper[4727]: I1210 15:41:15.399017 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qv85f" event={"ID":"cc310730-512f-4993-b5de-a16b1f5d51e1","Type":"ContainerStarted","Data":"eaefb182038d9415f100fdcb6da377c6b5e097b9eb6b39ca168a38acad6e8f51"}
Dec 10 15:41:15 crc kubenswrapper[4727]: E1210 15:41:15.564889 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:41:16 crc kubenswrapper[4727]: I1210 15:41:16.413148 4727 generic.go:334] "Generic (PLEG): container finished" podID="cc310730-512f-4993-b5de-a16b1f5d51e1" containerID="5e670425c4419483051986d9a4f10d6abfe891473922efffc469728779fc1c69" exitCode=0
Dec 10 15:41:16 crc kubenswrapper[4727]: I1210 15:41:16.413212 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qv85f" event={"ID":"cc310730-512f-4993-b5de-a16b1f5d51e1","Type":"ContainerDied","Data":"5e670425c4419483051986d9a4f10d6abfe891473922efffc469728779fc1c69"}
Dec 10 15:41:17 crc kubenswrapper[4727]: I1210 15:41:17.563998 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b"
Dec 10 15:41:17 crc kubenswrapper[4727]: E1210 15:41:17.564755 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 15:41:18 crc kubenswrapper[4727]: I1210 15:41:18.441062 4727 generic.go:334] "Generic (PLEG): container finished" podID="cc310730-512f-4993-b5de-a16b1f5d51e1" containerID="87464f0986baeb0e032f4cd810bddf81f33582eb4fbf092248b9c49469034eb8" exitCode=0
Dec 10 15:41:18 crc kubenswrapper[4727]: I1210 15:41:18.441101 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qv85f" event={"ID":"cc310730-512f-4993-b5de-a16b1f5d51e1","Type":"ContainerDied","Data":"87464f0986baeb0e032f4cd810bddf81f33582eb4fbf092248b9c49469034eb8"}
Dec 10 15:41:19 crc kubenswrapper[4727]: I1210 15:41:19.453204 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qv85f" event={"ID":"cc310730-512f-4993-b5de-a16b1f5d51e1","Type":"ContainerStarted","Data":"1be44f117fc97a2744c016b94a23dd807080f0ecafc0534e57ae3ab64c936c19"}
Dec 10 15:41:19 crc kubenswrapper[4727]: I1210 15:41:19.488146 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qv85f" podStartSLOduration=3.047361978 podStartE2EDuration="5.488087623s" podCreationTimestamp="2025-12-10 15:41:14 +0000 UTC" firstStartedPulling="2025-12-10 15:41:16.416464803 +0000 UTC m=+4180.611239385" lastFinishedPulling="2025-12-10 15:41:18.857190488 +0000 UTC m=+4183.051965030" observedRunningTime="2025-12-10 15:41:19.476404248 +0000 UTC m=+4183.671178800" watchObservedRunningTime="2025-12-10 15:41:19.488087623 +0000 UTC m=+4183.682862175"
Dec 10 15:41:23 crc kubenswrapper[4727]: I1210 15:41:23.079193 4727 scope.go:117] "RemoveContainer" containerID="9c38088fad3d34835c2e59383a3de70abd18d99e3aa7bf08398290b8e1600acc"
Dec 10 15:41:23 crc kubenswrapper[4727]: I1210 15:41:23.112091 4727 scope.go:117] "RemoveContainer" containerID="6d42cc8f9b0ae67caf42768f85bf07d4899771e8623b03b7e3a494398b260ce8"
Dec 10 15:41:23 crc kubenswrapper[4727]: I1210 15:41:23.153188 4727 scope.go:117] "RemoveContainer" containerID="8278f67f7697bd200199852e9d0487464a02517810d25d4037c389fb2e6003ff"
Dec 10 15:41:23 crc kubenswrapper[4727]: E1210 15:41:23.566166 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:41:24 crc kubenswrapper[4727]: I1210 15:41:24.682316 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qv85f"
Dec 10 15:41:24 crc kubenswrapper[4727]: I1210 15:41:24.682686 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qv85f"
Dec 10 15:41:24 crc kubenswrapper[4727]: I1210 15:41:24.742467 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qv85f"
Dec 10 15:41:26 crc kubenswrapper[4727]: I1210 15:41:26.077442 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qv85f"
Dec 10 15:41:26 crc kubenswrapper[4727]: I1210 15:41:26.142234 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qv85f"]
Dec 10 15:41:26 crc kubenswrapper[4727]: E1210 15:41:26.581475 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:41:27 crc kubenswrapper[4727]: I1210 15:41:27.675716 4727 kuberuntime_container.go:808] "Killing container with a grace period"
pod="openshift-marketplace/redhat-marketplace-qv85f" podUID="cc310730-512f-4993-b5de-a16b1f5d51e1" containerName="registry-server" containerID="cri-o://1be44f117fc97a2744c016b94a23dd807080f0ecafc0534e57ae3ab64c936c19" gracePeriod=2 Dec 10 15:41:28 crc kubenswrapper[4727]: I1210 15:41:28.697147 4727 generic.go:334] "Generic (PLEG): container finished" podID="cc310730-512f-4993-b5de-a16b1f5d51e1" containerID="1be44f117fc97a2744c016b94a23dd807080f0ecafc0534e57ae3ab64c936c19" exitCode=0 Dec 10 15:41:28 crc kubenswrapper[4727]: I1210 15:41:28.697217 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qv85f" event={"ID":"cc310730-512f-4993-b5de-a16b1f5d51e1","Type":"ContainerDied","Data":"1be44f117fc97a2744c016b94a23dd807080f0ecafc0534e57ae3ab64c936c19"} Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.064366 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qv85f" Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.127425 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4wjc\" (UniqueName: \"kubernetes.io/projected/cc310730-512f-4993-b5de-a16b1f5d51e1-kube-api-access-m4wjc\") pod \"cc310730-512f-4993-b5de-a16b1f5d51e1\" (UID: \"cc310730-512f-4993-b5de-a16b1f5d51e1\") " Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.127578 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc310730-512f-4993-b5de-a16b1f5d51e1-catalog-content\") pod \"cc310730-512f-4993-b5de-a16b1f5d51e1\" (UID: \"cc310730-512f-4993-b5de-a16b1f5d51e1\") " Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.127708 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc310730-512f-4993-b5de-a16b1f5d51e1-utilities\") pod \"cc310730-512f-4993-b5de-a16b1f5d51e1\" (UID: \"cc310730-512f-4993-b5de-a16b1f5d51e1\") " Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.128672 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc310730-512f-4993-b5de-a16b1f5d51e1-utilities" (OuterVolumeSpecName: "utilities") pod "cc310730-512f-4993-b5de-a16b1f5d51e1" (UID: "cc310730-512f-4993-b5de-a16b1f5d51e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.129429 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc310730-512f-4993-b5de-a16b1f5d51e1-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.133351 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc310730-512f-4993-b5de-a16b1f5d51e1-kube-api-access-m4wjc" (OuterVolumeSpecName: "kube-api-access-m4wjc") pod "cc310730-512f-4993-b5de-a16b1f5d51e1" (UID: "cc310730-512f-4993-b5de-a16b1f5d51e1"). InnerVolumeSpecName "kube-api-access-m4wjc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.159898 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc310730-512f-4993-b5de-a16b1f5d51e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc310730-512f-4993-b5de-a16b1f5d51e1" (UID: "cc310730-512f-4993-b5de-a16b1f5d51e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.232461 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4wjc\" (UniqueName: \"kubernetes.io/projected/cc310730-512f-4993-b5de-a16b1f5d51e1-kube-api-access-m4wjc\") on node \"crc\" DevicePath \"\"" Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.232523 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc310730-512f-4993-b5de-a16b1f5d51e1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.709143 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qv85f" event={"ID":"cc310730-512f-4993-b5de-a16b1f5d51e1","Type":"ContainerDied","Data":"eaefb182038d9415f100fdcb6da377c6b5e097b9eb6b39ca168a38acad6e8f51"} Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.710715 4727 scope.go:117] "RemoveContainer" containerID="1be44f117fc97a2744c016b94a23dd807080f0ecafc0534e57ae3ab64c936c19" Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.711368 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qv85f" Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.734099 4727 scope.go:117] "RemoveContainer" containerID="87464f0986baeb0e032f4cd810bddf81f33582eb4fbf092248b9c49469034eb8" Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.763524 4727 scope.go:117] "RemoveContainer" containerID="5e670425c4419483051986d9a4f10d6abfe891473922efffc469728779fc1c69" Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.778423 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qv85f"] Dec 10 15:41:29 crc kubenswrapper[4727]: I1210 15:41:29.796167 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qv85f"] Dec 10 15:41:30 crc kubenswrapper[4727]: I1210 15:41:30.564439 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:41:30 crc kubenswrapper[4727]: E1210 15:41:30.564748 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:41:30 crc kubenswrapper[4727]: I1210 15:41:30.576770 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc310730-512f-4993-b5de-a16b1f5d51e1" path="/var/lib/kubelet/pods/cc310730-512f-4993-b5de-a16b1f5d51e1/volumes" Dec 10 15:41:37 crc kubenswrapper[4727]: E1210 15:41:37.565411 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:41:40 crc kubenswrapper[4727]: E1210 15:41:40.568317 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:41:44 crc kubenswrapper[4727]: I1210 15:41:44.564403 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b" Dec 10 15:41:44 crc kubenswrapper[4727]: I1210 15:41:44.855897 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"19871bb1326002a5b787fe9cdab6775b032466dfefd63b7f0ba220e2030b241f"} Dec 10 15:41:50 crc kubenswrapper[4727]: E1210 15:41:50.565432 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:41:52 crc kubenswrapper[4727]: E1210 15:41:52.564996 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:42:03 crc kubenswrapper[4727]: E1210 15:42:03.564748 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:42:04 crc kubenswrapper[4727]: E1210 15:42:04.671075 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:42:17 crc kubenswrapper[4727]: E1210 15:42:17.565526 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:42:18 crc kubenswrapper[4727]: E1210 15:42:18.566266 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:42:20 crc kubenswrapper[4727]: I1210 15:42:20.211689 4727 generic.go:334] "Generic (PLEG): container finished" podID="791c3204-0b3f-4004-8871-8af969076bc2" containerID="e919b2147f1a9caeb93cf42d7b8b8d956c785385a2af621adf82cd514483f3e9" exitCode=2 Dec 10 15:42:20 crc kubenswrapper[4727]: I1210 15:42:20.211806 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" event={"ID":"791c3204-0b3f-4004-8871-8af969076bc2","Type":"ContainerDied","Data":"e919b2147f1a9caeb93cf42d7b8b8d956c785385a2af621adf82cd514483f3e9"} Dec 10 15:42:21 crc kubenswrapper[4727]: I1210 15:42:21.752231 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" Dec 10 15:42:21 crc kubenswrapper[4727]: I1210 15:42:21.848766 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/791c3204-0b3f-4004-8871-8af969076bc2-inventory\") pod \"791c3204-0b3f-4004-8871-8af969076bc2\" (UID: \"791c3204-0b3f-4004-8871-8af969076bc2\") " Dec 10 15:42:21 crc kubenswrapper[4727]: I1210 15:42:21.848957 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/791c3204-0b3f-4004-8871-8af969076bc2-ssh-key\") pod \"791c3204-0b3f-4004-8871-8af969076bc2\" (UID: \"791c3204-0b3f-4004-8871-8af969076bc2\") " Dec 10 15:42:21 crc kubenswrapper[4727]: I1210 15:42:21.849169 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js926\" (UniqueName: \"kubernetes.io/projected/791c3204-0b3f-4004-8871-8af969076bc2-kube-api-access-js926\") pod \"791c3204-0b3f-4004-8871-8af969076bc2\" (UID: \"791c3204-0b3f-4004-8871-8af969076bc2\") " Dec 10 15:42:21 crc kubenswrapper[4727]: I1210 15:42:21.865351 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/791c3204-0b3f-4004-8871-8af969076bc2-kube-api-access-js926" (OuterVolumeSpecName: "kube-api-access-js926") pod "791c3204-0b3f-4004-8871-8af969076bc2" (UID: "791c3204-0b3f-4004-8871-8af969076bc2"). InnerVolumeSpecName "kube-api-access-js926". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:42:21 crc kubenswrapper[4727]: I1210 15:42:21.884530 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791c3204-0b3f-4004-8871-8af969076bc2-inventory" (OuterVolumeSpecName: "inventory") pod "791c3204-0b3f-4004-8871-8af969076bc2" (UID: "791c3204-0b3f-4004-8871-8af969076bc2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:42:21 crc kubenswrapper[4727]: I1210 15:42:21.893495 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791c3204-0b3f-4004-8871-8af969076bc2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "791c3204-0b3f-4004-8871-8af969076bc2" (UID: "791c3204-0b3f-4004-8871-8af969076bc2"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:42:21 crc kubenswrapper[4727]: I1210 15:42:21.951817 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/791c3204-0b3f-4004-8871-8af969076bc2-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:21 crc kubenswrapper[4727]: I1210 15:42:21.951851 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js926\" (UniqueName: \"kubernetes.io/projected/791c3204-0b3f-4004-8871-8af969076bc2-kube-api-access-js926\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:21 crc kubenswrapper[4727]: I1210 15:42:21.951863 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/791c3204-0b3f-4004-8871-8af969076bc2-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:22 crc kubenswrapper[4727]: I1210 15:42:22.234074 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" event={"ID":"791c3204-0b3f-4004-8871-8af969076bc2","Type":"ContainerDied","Data":"853bd594c7a72aa45ebad4e3d575418445cdad379551cf55c26feaad9e8673f0"} Dec 10 15:42:22 crc kubenswrapper[4727]: I1210 15:42:22.234427 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="853bd594c7a72aa45ebad4e3d575418445cdad379551cf55c26feaad9e8673f0" Dec 10 15:42:22 crc kubenswrapper[4727]: I1210 15:42:22.234160 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b46zs" Dec 10 15:42:30 crc kubenswrapper[4727]: E1210 15:42:30.567799 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:42:30 crc kubenswrapper[4727]: E1210 15:42:30.568859 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.378586 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-blt72"] Dec 10 15:42:40 crc kubenswrapper[4727]: E1210 15:42:40.391022 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc310730-512f-4993-b5de-a16b1f5d51e1" containerName="extract-utilities" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.391060 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc310730-512f-4993-b5de-a16b1f5d51e1" containerName="extract-utilities" Dec 10 15:42:40 crc kubenswrapper[4727]: E1210 15:42:40.391133 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc310730-512f-4993-b5de-a16b1f5d51e1" containerName="extract-content" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.391144 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc310730-512f-4993-b5de-a16b1f5d51e1" containerName="extract-content" Dec 10 15:42:40 crc kubenswrapper[4727]: E1210 15:42:40.391164 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cc310730-512f-4993-b5de-a16b1f5d51e1" containerName="registry-server" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.391172 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc310730-512f-4993-b5de-a16b1f5d51e1" containerName="registry-server" Dec 10 15:42:40 crc kubenswrapper[4727]: E1210 15:42:40.391188 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791c3204-0b3f-4004-8871-8af969076bc2" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.391198 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="791c3204-0b3f-4004-8871-8af969076bc2" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.392421 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="791c3204-0b3f-4004-8871-8af969076bc2" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.392452 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc310730-512f-4993-b5de-a16b1f5d51e1" containerName="registry-server" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.397628 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.402279 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-blt72"] Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.598555 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fn2w\" (UniqueName: \"kubernetes.io/projected/9a71781a-8971-4ef5-9a7c-40918bb2910c-kube-api-access-9fn2w\") pod \"community-operators-blt72\" (UID: \"9a71781a-8971-4ef5-9a7c-40918bb2910c\") " pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.598974 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a71781a-8971-4ef5-9a7c-40918bb2910c-utilities\") pod \"community-operators-blt72\" (UID: \"9a71781a-8971-4ef5-9a7c-40918bb2910c\") " pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.599145 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a71781a-8971-4ef5-9a7c-40918bb2910c-catalog-content\") pod \"community-operators-blt72\" (UID: \"9a71781a-8971-4ef5-9a7c-40918bb2910c\") " pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.700046 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a71781a-8971-4ef5-9a7c-40918bb2910c-utilities\") pod \"community-operators-blt72\" (UID: \"9a71781a-8971-4ef5-9a7c-40918bb2910c\") " pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.700139 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a71781a-8971-4ef5-9a7c-40918bb2910c-catalog-content\") pod \"community-operators-blt72\" (UID: \"9a71781a-8971-4ef5-9a7c-40918bb2910c\") " 
pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.700324 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fn2w\" (UniqueName: \"kubernetes.io/projected/9a71781a-8971-4ef5-9a7c-40918bb2910c-kube-api-access-9fn2w\") pod \"community-operators-blt72\" (UID: \"9a71781a-8971-4ef5-9a7c-40918bb2910c\") " pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.700682 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a71781a-8971-4ef5-9a7c-40918bb2910c-utilities\") pod \"community-operators-blt72\" (UID: \"9a71781a-8971-4ef5-9a7c-40918bb2910c\") " pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.700836 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a71781a-8971-4ef5-9a7c-40918bb2910c-catalog-content\") pod \"community-operators-blt72\" (UID: \"9a71781a-8971-4ef5-9a7c-40918bb2910c\") " pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.722253 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fn2w\" (UniqueName: \"kubernetes.io/projected/9a71781a-8971-4ef5-9a7c-40918bb2910c-kube-api-access-9fn2w\") pod \"community-operators-blt72\" (UID: \"9a71781a-8971-4ef5-9a7c-40918bb2910c\") " pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:40 crc kubenswrapper[4727]: I1210 15:42:40.749511 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:41 crc kubenswrapper[4727]: I1210 15:42:41.358933 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-blt72"] Dec 10 15:42:41 crc kubenswrapper[4727]: I1210 15:42:41.469700 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blt72" event={"ID":"9a71781a-8971-4ef5-9a7c-40918bb2910c","Type":"ContainerStarted","Data":"a1038c58fca0826ed7a8d31f42583097ee646b7cfa8802448543865072cd0753"} Dec 10 15:42:42 crc kubenswrapper[4727]: I1210 15:42:42.480025 4727 generic.go:334] "Generic (PLEG): container finished" podID="9a71781a-8971-4ef5-9a7c-40918bb2910c" containerID="10f4142d488f43cf656f999f51e5da6ffe53b1718cd01da914f70ffde54c417f" exitCode=0 Dec 10 15:42:42 crc kubenswrapper[4727]: I1210 15:42:42.480084 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blt72" event={"ID":"9a71781a-8971-4ef5-9a7c-40918bb2910c","Type":"ContainerDied","Data":"10f4142d488f43cf656f999f51e5da6ffe53b1718cd01da914f70ffde54c417f"} Dec 10 15:42:42 crc kubenswrapper[4727]: E1210 15:42:42.564148 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:42:43 crc kubenswrapper[4727]: E1210 15:42:43.565659 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:42:44 crc kubenswrapper[4727]: I1210 15:42:44.499108 4727 generic.go:334] "Generic (PLEG): container finished" podID="9a71781a-8971-4ef5-9a7c-40918bb2910c" containerID="2fc2d661f4bcc3313217491ee32a42fdc9311cd06d991e4739abcd851e5d83b3" exitCode=0 Dec 10 15:42:44 crc kubenswrapper[4727]: I1210 15:42:44.499158 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blt72" event={"ID":"9a71781a-8971-4ef5-9a7c-40918bb2910c","Type":"ContainerDied","Data":"2fc2d661f4bcc3313217491ee32a42fdc9311cd06d991e4739abcd851e5d83b3"} Dec 10 15:42:45 crc kubenswrapper[4727]: I1210 15:42:45.518251 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blt72" event={"ID":"9a71781a-8971-4ef5-9a7c-40918bb2910c","Type":"ContainerStarted","Data":"518ad441b2c71abf62d3882e02f8c31ae55f9f5255bb018a0dc4294ed2e87a04"} Dec 10 15:42:45 crc kubenswrapper[4727]: I1210 15:42:45.556274 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-blt72" podStartSLOduration=2.982680154 podStartE2EDuration="5.556239083s" podCreationTimestamp="2025-12-10 15:42:40 +0000 UTC" firstStartedPulling="2025-12-10 15:42:42.482316054 +0000 UTC m=+4266.677090596" lastFinishedPulling="2025-12-10 15:42:45.055874973 +0000 UTC m=+4269.250649525" observedRunningTime="2025-12-10 15:42:45.546423445 +0000 UTC m=+4269.741197997" watchObservedRunningTime="2025-12-10 15:42:45.556239083 +0000 UTC m=+4269.751013625" Dec 10 15:42:50 crc kubenswrapper[4727]: I1210 15:42:50.750777 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:50 crc kubenswrapper[4727]: I1210 15:42:50.751330 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:50 crc kubenswrapper[4727]: I1210 15:42:50.889297 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:51 crc kubenswrapper[4727]: I1210 15:42:51.634253 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:51 crc kubenswrapper[4727]: I1210 15:42:51.688776 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-blt72"] Dec 10 15:42:53 crc kubenswrapper[4727]: E1210 15:42:53.565763 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:42:53 crc kubenswrapper[4727]: I1210 15:42:53.607101 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-blt72" podUID="9a71781a-8971-4ef5-9a7c-40918bb2910c" containerName="registry-server" containerID="cri-o://518ad441b2c71abf62d3882e02f8c31ae55f9f5255bb018a0dc4294ed2e87a04" gracePeriod=2 Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.146074 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.194434 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a71781a-8971-4ef5-9a7c-40918bb2910c-catalog-content\") pod \"9a71781a-8971-4ef5-9a7c-40918bb2910c\" (UID: \"9a71781a-8971-4ef5-9a7c-40918bb2910c\") " Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.194548 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a71781a-8971-4ef5-9a7c-40918bb2910c-utilities\") pod \"9a71781a-8971-4ef5-9a7c-40918bb2910c\" (UID: \"9a71781a-8971-4ef5-9a7c-40918bb2910c\") " Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.194610 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fn2w\" (UniqueName: \"kubernetes.io/projected/9a71781a-8971-4ef5-9a7c-40918bb2910c-kube-api-access-9fn2w\") pod \"9a71781a-8971-4ef5-9a7c-40918bb2910c\" (UID: \"9a71781a-8971-4ef5-9a7c-40918bb2910c\") " Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.195525 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a71781a-8971-4ef5-9a7c-40918bb2910c-utilities" (OuterVolumeSpecName: "utilities") pod "9a71781a-8971-4ef5-9a7c-40918bb2910c" (UID: "9a71781a-8971-4ef5-9a7c-40918bb2910c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.201949 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a71781a-8971-4ef5-9a7c-40918bb2910c-kube-api-access-9fn2w" (OuterVolumeSpecName: "kube-api-access-9fn2w") pod "9a71781a-8971-4ef5-9a7c-40918bb2910c" (UID: "9a71781a-8971-4ef5-9a7c-40918bb2910c"). InnerVolumeSpecName "kube-api-access-9fn2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.298177 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a71781a-8971-4ef5-9a7c-40918bb2910c-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.298233 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fn2w\" (UniqueName: \"kubernetes.io/projected/9a71781a-8971-4ef5-9a7c-40918bb2910c-kube-api-access-9fn2w\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.346386 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a71781a-8971-4ef5-9a7c-40918bb2910c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a71781a-8971-4ef5-9a7c-40918bb2910c" (UID: "9a71781a-8971-4ef5-9a7c-40918bb2910c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.399844 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a71781a-8971-4ef5-9a7c-40918bb2910c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.620387 4727 generic.go:334] "Generic (PLEG): container finished" podID="9a71781a-8971-4ef5-9a7c-40918bb2910c" containerID="518ad441b2c71abf62d3882e02f8c31ae55f9f5255bb018a0dc4294ed2e87a04" exitCode=0 Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.620451 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blt72" event={"ID":"9a71781a-8971-4ef5-9a7c-40918bb2910c","Type":"ContainerDied","Data":"518ad441b2c71abf62d3882e02f8c31ae55f9f5255bb018a0dc4294ed2e87a04"} Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.620548 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-blt72" Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.620578 4727 scope.go:117] "RemoveContainer" containerID="518ad441b2c71abf62d3882e02f8c31ae55f9f5255bb018a0dc4294ed2e87a04" Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.620558 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blt72" event={"ID":"9a71781a-8971-4ef5-9a7c-40918bb2910c","Type":"ContainerDied","Data":"a1038c58fca0826ed7a8d31f42583097ee646b7cfa8802448543865072cd0753"} Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.651264 4727 scope.go:117] "RemoveContainer" containerID="2fc2d661f4bcc3313217491ee32a42fdc9311cd06d991e4739abcd851e5d83b3" Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.653200 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-blt72"] Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.664011 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-blt72"] Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.678172 4727 scope.go:117] "RemoveContainer" containerID="10f4142d488f43cf656f999f51e5da6ffe53b1718cd01da914f70ffde54c417f" Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.741811 4727 scope.go:117] "RemoveContainer" containerID="518ad441b2c71abf62d3882e02f8c31ae55f9f5255bb018a0dc4294ed2e87a04" Dec 10 15:42:54 crc kubenswrapper[4727]: E1210 15:42:54.742383 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"518ad441b2c71abf62d3882e02f8c31ae55f9f5255bb018a0dc4294ed2e87a04\": container with ID starting with 518ad441b2c71abf62d3882e02f8c31ae55f9f5255bb018a0dc4294ed2e87a04 not found: ID does not exist" containerID="518ad441b2c71abf62d3882e02f8c31ae55f9f5255bb018a0dc4294ed2e87a04" Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.742429 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"518ad441b2c71abf62d3882e02f8c31ae55f9f5255bb018a0dc4294ed2e87a04"} err="failed to get container status \"518ad441b2c71abf62d3882e02f8c31ae55f9f5255bb018a0dc4294ed2e87a04\": rpc error: code = NotFound desc = could not find container \"518ad441b2c71abf62d3882e02f8c31ae55f9f5255bb018a0dc4294ed2e87a04\": container with ID starting with 518ad441b2c71abf62d3882e02f8c31ae55f9f5255bb018a0dc4294ed2e87a04 not found: ID does not exist" Dec 10 
15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.742467 4727 scope.go:117] "RemoveContainer" containerID="2fc2d661f4bcc3313217491ee32a42fdc9311cd06d991e4739abcd851e5d83b3" Dec 10 15:42:54 crc kubenswrapper[4727]: E1210 15:42:54.742814 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fc2d661f4bcc3313217491ee32a42fdc9311cd06d991e4739abcd851e5d83b3\": container with ID starting with 2fc2d661f4bcc3313217491ee32a42fdc9311cd06d991e4739abcd851e5d83b3 not found: ID does not exist" containerID="2fc2d661f4bcc3313217491ee32a42fdc9311cd06d991e4739abcd851e5d83b3" Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.742843 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fc2d661f4bcc3313217491ee32a42fdc9311cd06d991e4739abcd851e5d83b3"} err="failed to get container status \"2fc2d661f4bcc3313217491ee32a42fdc9311cd06d991e4739abcd851e5d83b3\": rpc error: code = NotFound desc = could not find container \"2fc2d661f4bcc3313217491ee32a42fdc9311cd06d991e4739abcd851e5d83b3\": container with ID starting with 2fc2d661f4bcc3313217491ee32a42fdc9311cd06d991e4739abcd851e5d83b3 not found: ID does not exist" Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.742860 4727 scope.go:117] "RemoveContainer" containerID="10f4142d488f43cf656f999f51e5da6ffe53b1718cd01da914f70ffde54c417f" Dec 10 15:42:54 crc kubenswrapper[4727]: E1210 15:42:54.743126 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10f4142d488f43cf656f999f51e5da6ffe53b1718cd01da914f70ffde54c417f\": container with ID starting with 10f4142d488f43cf656f999f51e5da6ffe53b1718cd01da914f70ffde54c417f not found: ID does not exist" containerID="10f4142d488f43cf656f999f51e5da6ffe53b1718cd01da914f70ffde54c417f" Dec 10 15:42:54 crc kubenswrapper[4727]: I1210 15:42:54.743151 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10f4142d488f43cf656f999f51e5da6ffe53b1718cd01da914f70ffde54c417f"} err="failed to get container status \"10f4142d488f43cf656f999f51e5da6ffe53b1718cd01da914f70ffde54c417f\": rpc error: code = NotFound desc = could not find container \"10f4142d488f43cf656f999f51e5da6ffe53b1718cd01da914f70ffde54c417f\": container with ID starting with 10f4142d488f43cf656f999f51e5da6ffe53b1718cd01da914f70ffde54c417f not found: ID does not exist" Dec 10 15:42:55 crc kubenswrapper[4727]: E1210 15:42:55.565716 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:42:56 crc kubenswrapper[4727]: I1210 15:42:56.579879 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a71781a-8971-4ef5-9a7c-40918bb2910c" path="/var/lib/kubelet/pods/9a71781a-8971-4ef5-9a7c-40918bb2910c/volumes" Dec 10 15:43:07 crc kubenswrapper[4727]: E1210 15:43:07.565265 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:43:07 
crc kubenswrapper[4727]: E1210 15:43:07.565287 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:43:20 crc kubenswrapper[4727]: E1210 15:43:20.566097 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:43:21 crc kubenswrapper[4727]: E1210 15:43:21.564756 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:43:31 crc kubenswrapper[4727]: I1210 15:43:31.567252 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:43:31 crc kubenswrapper[4727]: E1210 15:43:31.668351 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:43:31 crc kubenswrapper[4727]: E1210 15:43:31.668404 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:43:31 crc kubenswrapper[4727]: E1210 15:43:31.668526 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:43:31 crc kubenswrapper[4727]: E1210 15:43:31.669691 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:43:33 crc kubenswrapper[4727]: E1210 15:43:33.566233 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:43:46 crc kubenswrapper[4727]: E1210 15:43:46.571286 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:43:47 crc kubenswrapper[4727]: E1210 15:43:47.566766 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:44:00 crc kubenswrapper[4727]: E1210 15:44:00.566417 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:44:00 crc kubenswrapper[4727]: E1210 15:44:00.566930 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:44:07 crc kubenswrapper[4727]: I1210 15:44:07.724021 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:44:07 crc kubenswrapper[4727]: I1210 15:44:07.724727 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:44:12 crc kubenswrapper[4727]: E1210 15:44:12.702345 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:44:12 crc kubenswrapper[4727]: E1210 15:44:12.702868 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:44:12 crc kubenswrapper[4727]: E1210 15:44:12.703119 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 10 15:44:12 crc kubenswrapper[4727]: E1210 15:44:12.704513 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:44:14 crc kubenswrapper[4727]: E1210 15:44:14.565344 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:44:23 crc kubenswrapper[4727]: E1210 15:44:23.566083 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:44:26 crc kubenswrapper[4727]: E1210 15:44:26.573271 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:44:27 crc kubenswrapper[4727]: I1210 15:44:27.633374 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5z8n5"] Dec 10 15:44:27 crc kubenswrapper[4727]: E1210 15:44:27.634309 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a71781a-8971-4ef5-9a7c-40918bb2910c" containerName="registry-server" Dec 10 15:44:27 crc kubenswrapper[4727]: I1210 15:44:27.634362 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a71781a-8971-4ef5-9a7c-40918bb2910c" containerName="registry-server" Dec 10 15:44:27 crc kubenswrapper[4727]: E1210 15:44:27.634443 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a71781a-8971-4ef5-9a7c-40918bb2910c" containerName="extract-content" Dec 10 15:44:27 crc kubenswrapper[4727]: I1210 15:44:27.634456 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a71781a-8971-4ef5-9a7c-40918bb2910c" containerName="extract-content" Dec 10 15:44:27 crc kubenswrapper[4727]: E1210 15:44:27.634491 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a71781a-8971-4ef5-9a7c-40918bb2910c" containerName="extract-utilities" Dec 10 15:44:27 crc kubenswrapper[4727]: I1210 15:44:27.634529 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a71781a-8971-4ef5-9a7c-40918bb2910c" containerName="extract-utilities" Dec 10 15:44:27 crc kubenswrapper[4727]: I1210 15:44:27.635027 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a71781a-8971-4ef5-9a7c-40918bb2910c" containerName="registry-server" Dec 10 15:44:27 crc kubenswrapper[4727]: I1210 15:44:27.642680 4727 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5z8n5" Dec 10 15:44:27 crc kubenswrapper[4727]: I1210 15:44:27.655192 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5z8n5"] Dec 10 15:44:27 crc kubenswrapper[4727]: I1210 15:44:27.774720 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5b0a32-1b49-4444-8ebc-6fc35209e8e2-catalog-content\") pod \"redhat-operators-5z8n5\" (UID: \"5f5b0a32-1b49-4444-8ebc-6fc35209e8e2\") " pod="openshift-marketplace/redhat-operators-5z8n5" Dec 10 15:44:27 crc kubenswrapper[4727]: I1210 15:44:27.774850 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5b0a32-1b49-4444-8ebc-6fc35209e8e2-utilities\") pod \"redhat-operators-5z8n5\" (UID: \"5f5b0a32-1b49-4444-8ebc-6fc35209e8e2\") " pod="openshift-marketplace/redhat-operators-5z8n5" Dec 10 15:44:27 crc kubenswrapper[4727]: I1210 15:44:27.775022 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqmw7\" (UniqueName: \"kubernetes.io/projected/5f5b0a32-1b49-4444-8ebc-6fc35209e8e2-kube-api-access-hqmw7\") pod \"redhat-operators-5z8n5\" (UID: \"5f5b0a32-1b49-4444-8ebc-6fc35209e8e2\") " pod="openshift-marketplace/redhat-operators-5z8n5" Dec 10 15:44:27 crc kubenswrapper[4727]: I1210 15:44:27.920079 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5b0a32-1b49-4444-8ebc-6fc35209e8e2-catalog-content\") pod \"redhat-operators-5z8n5\" (UID: \"5f5b0a32-1b49-4444-8ebc-6fc35209e8e2\") " pod="openshift-marketplace/redhat-operators-5z8n5" Dec 10 15:44:27 crc kubenswrapper[4727]: I1210 15:44:27.920244 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5b0a32-1b49-4444-8ebc-6fc35209e8e2-utilities\") pod \"redhat-operators-5z8n5\" (UID: \"5f5b0a32-1b49-4444-8ebc-6fc35209e8e2\") " pod="openshift-marketplace/redhat-operators-5z8n5" Dec 10 15:44:27 crc kubenswrapper[4727]: I1210 15:44:27.920532 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmw7\" (UniqueName: \"kubernetes.io/projected/5f5b0a32-1b49-4444-8ebc-6fc35209e8e2-kube-api-access-hqmw7\") pod \"redhat-operators-5z8n5\" (UID: \"5f5b0a32-1b49-4444-8ebc-6fc35209e8e2\") " pod="openshift-marketplace/redhat-operators-5z8n5" Dec 10 15:44:27 crc kubenswrapper[4727]: I1210 15:44:27.921624 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5b0a32-1b49-4444-8ebc-6fc35209e8e2-catalog-content\") pod \"redhat-operators-5z8n5\" (UID: \"5f5b0a32-1b49-4444-8ebc-6fc35209e8e2\") " pod="openshift-marketplace/redhat-operators-5z8n5" Dec 10 15:44:27 crc kubenswrapper[4727]: I1210 15:44:27.922025 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5b0a32-1b49-4444-8ebc-6fc35209e8e2-utilities\") pod \"redhat-operators-5z8n5\" (UID: \"5f5b0a32-1b49-4444-8ebc-6fc35209e8e2\") " pod="openshift-marketplace/redhat-operators-5z8n5" Dec 10 15:44:28 crc kubenswrapper[4727]: I1210 15:44:28.227676 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-hqmw7\" (UniqueName: \"kubernetes.io/projected/5f5b0a32-1b49-4444-8ebc-6fc35209e8e2-kube-api-access-hqmw7\") pod \"redhat-operators-5z8n5\" (UID: \"5f5b0a32-1b49-4444-8ebc-6fc35209e8e2\") " pod="openshift-marketplace/redhat-operators-5z8n5" Dec 10 15:44:28 crc kubenswrapper[4727]: I1210 15:44:28.272382 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5z8n5" Dec 10 15:44:28 crc kubenswrapper[4727]: I1210 15:44:28.810387 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5z8n5"] Dec 10 15:44:28 crc kubenswrapper[4727]: W1210 15:44:28.814328 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f5b0a32_1b49_4444_8ebc_6fc35209e8e2.slice/crio-4cebefa35e0c268e021f9ef61dc6903f34419736a650772463f5ed95a953c27b WatchSource:0}: Error finding container 4cebefa35e0c268e021f9ef61dc6903f34419736a650772463f5ed95a953c27b: Status 404 returned error can't find the container with id 4cebefa35e0c268e021f9ef61dc6903f34419736a650772463f5ed95a953c27b Dec 10 15:44:28 crc kubenswrapper[4727]: I1210 15:44:28.962569 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5z8n5" event={"ID":"5f5b0a32-1b49-4444-8ebc-6fc35209e8e2","Type":"ContainerStarted","Data":"4cebefa35e0c268e021f9ef61dc6903f34419736a650772463f5ed95a953c27b"} Dec 10 15:44:29 crc kubenswrapper[4727]: I1210 15:44:29.978431 4727 generic.go:334] "Generic (PLEG): container finished" podID="5f5b0a32-1b49-4444-8ebc-6fc35209e8e2" containerID="3ff3181b2aa9133f5f05b0df702559b140d1f35537628835a3151d4a7a29d02d" exitCode=0 Dec 10 15:44:29 crc kubenswrapper[4727]: I1210 15:44:29.978710 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5z8n5" event={"ID":"5f5b0a32-1b49-4444-8ebc-6fc35209e8e2","Type":"ContainerDied","Data":"3ff3181b2aa9133f5f05b0df702559b140d1f35537628835a3151d4a7a29d02d"} Dec 10 15:44:37 crc kubenswrapper[4727]: I1210 15:44:37.724451 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:44:37 crc kubenswrapper[4727]: I1210 15:44:37.725087 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:44:38 crc kubenswrapper[4727]: E1210 15:44:38.564602 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:44:39 crc kubenswrapper[4727]: I1210 15:44:39.071975 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5z8n5" event={"ID":"5f5b0a32-1b49-4444-8ebc-6fc35209e8e2","Type":"ContainerStarted","Data":"9b948536e81088fd4b703d770c926d3238567d6985c06955db90864e2699a4b6"} Dec 10 15:44:41 
crc kubenswrapper[4727]: E1210 15:44:41.564353 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:44:42 crc kubenswrapper[4727]: I1210 15:44:42.116362 4727 generic.go:334] "Generic (PLEG): container finished" podID="5f5b0a32-1b49-4444-8ebc-6fc35209e8e2" containerID="9b948536e81088fd4b703d770c926d3238567d6985c06955db90864e2699a4b6" exitCode=0 Dec 10 15:44:42 crc kubenswrapper[4727]: I1210 15:44:42.116481 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5z8n5" event={"ID":"5f5b0a32-1b49-4444-8ebc-6fc35209e8e2","Type":"ContainerDied","Data":"9b948536e81088fd4b703d770c926d3238567d6985c06955db90864e2699a4b6"} Dec 10 15:44:45 crc kubenswrapper[4727]: I1210 15:44:45.148294 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5z8n5" event={"ID":"5f5b0a32-1b49-4444-8ebc-6fc35209e8e2","Type":"ContainerStarted","Data":"60ea9c43b92dc8381ba1ea2c5f7f7cde0863b755d2f773b59aa508de5c1bc22e"} Dec 10 15:44:45 crc kubenswrapper[4727]: I1210 15:44:45.189237 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5z8n5" podStartSLOduration=4.11570372 podStartE2EDuration="18.189206526s" podCreationTimestamp="2025-12-10 15:44:27 +0000 UTC" firstStartedPulling="2025-12-10 15:44:29.981703186 +0000 UTC m=+4374.176477738" lastFinishedPulling="2025-12-10 15:44:44.055206002 +0000 UTC m=+4388.249980544" observedRunningTime="2025-12-10 15:44:45.167704273 +0000 UTC m=+4389.362478855" watchObservedRunningTime="2025-12-10 15:44:45.189206526 +0000 UTC m=+4389.383981108" Dec 10 15:44:48 crc kubenswrapper[4727]: I1210 15:44:48.273153 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5z8n5" Dec 10 15:44:48 crc kubenswrapper[4727]: I1210 15:44:48.273695 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5z8n5" Dec 10 15:44:49 crc kubenswrapper[4727]: I1210 15:44:49.324543 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5z8n5" podUID="5f5b0a32-1b49-4444-8ebc-6fc35209e8e2" containerName="registry-server" probeResult="failure" output=< Dec 10 15:44:49 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Dec 10 15:44:49 crc kubenswrapper[4727]: > Dec 10 15:44:53 crc kubenswrapper[4727]: E1210 15:44:53.566798 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:44:54 crc kubenswrapper[4727]: E1210 15:44:54.565430 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:44:58 crc 
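
The pod_startup_latency_tracker entry above for redhat-operators-5z8n5 is internally consistent: podStartSLOduration is the end-to-end duration minus the time spent pulling images. Checking the arithmetic from the four timestamps in that entry (the results reproduce the logged durations to within float rounding):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-12-10 15:44:27 +0000 UTC")
        firstPull := parse("2025-12-10 15:44:29.981703186 +0000 UTC")
        lastPull := parse("2025-12-10 15:44:44.055206002 +0000 UTC")
        running := parse("2025-12-10 15:44:45.189206526 +0000 UTC")

        e2e := running.Sub(created)          // 18.189206526s, as logged
        slo := e2e - lastPull.Sub(firstPull) // ~4.11570371s, as logged
        fmt.Println(e2e, slo)
    }

The 14-second gap between firstStartedPulling and lastFinishedPulling is the registry-server image pull; the startup-probe timeout on :50051 just above is the freshly started registry taking a few more seconds to open its gRPC port before the probe flips to "started" below.
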
kubenswrapper[4727]: I1210 15:44:58.336589 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5z8n5" Dec 10 15:44:58 crc kubenswrapper[4727]: I1210 15:44:58.402571 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5z8n5" Dec 10 15:44:58 crc kubenswrapper[4727]: I1210 15:44:58.667812 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5z8n5"] Dec 10 15:44:58 crc kubenswrapper[4727]: I1210 15:44:58.840981 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-72z9s"] Dec 10 15:44:58 crc kubenswrapper[4727]: I1210 15:44:58.841308 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-72z9s" podUID="d3cde39b-0b7a-4fdc-83f7-a213df953bfb" containerName="registry-server" containerID="cri-o://cd352b52e93569947f17893fe8e5cf5b01ce08504007e733f409b1b332137bae" gracePeriod=2 Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.064436 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn"] Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.066215 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.069635 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.070854 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.081472 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn"] Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.083308 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j82js" Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.083534 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.195526 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49cgn\" (UID: \"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.195599 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnhpp\" (UniqueName: \"kubernetes.io/projected/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-kube-api-access-xnhpp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49cgn\" (UID: \"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.195729 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49cgn\" (UID: \"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.297235 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnhpp\" (UniqueName: \"kubernetes.io/projected/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-kube-api-access-xnhpp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49cgn\" (UID: \"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.297387 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49cgn\" (UID: \"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.297482 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49cgn\" (UID: \"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.430015 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49cgn\" (UID: \"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.430503 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49cgn\" (UID: \"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.438946 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnhpp\" (UniqueName: \"kubernetes.io/projected/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-kube-api-access-xnhpp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49cgn\" (UID: \"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.685267 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" Dec 10 15:44:59 crc kubenswrapper[4727]: I1210 15:44:59.967676 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.118937 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-catalog-content\") pod \"d3cde39b-0b7a-4fdc-83f7-a213df953bfb\" (UID: \"d3cde39b-0b7a-4fdc-83f7-a213df953bfb\") " Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.119073 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf4b8\" (UniqueName: \"kubernetes.io/projected/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-kube-api-access-jf4b8\") pod \"d3cde39b-0b7a-4fdc-83f7-a213df953bfb\" (UID: \"d3cde39b-0b7a-4fdc-83f7-a213df953bfb\") " Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.119204 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-utilities\") pod \"d3cde39b-0b7a-4fdc-83f7-a213df953bfb\" (UID: \"d3cde39b-0b7a-4fdc-83f7-a213df953bfb\") " Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.120881 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-utilities" (OuterVolumeSpecName: "utilities") pod "d3cde39b-0b7a-4fdc-83f7-a213df953bfb" (UID: "d3cde39b-0b7a-4fdc-83f7-a213df953bfb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.128294 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-kube-api-access-jf4b8" (OuterVolumeSpecName: "kube-api-access-jf4b8") pod "d3cde39b-0b7a-4fdc-83f7-a213df953bfb" (UID: "d3cde39b-0b7a-4fdc-83f7-a213df953bfb"). InnerVolumeSpecName "kube-api-access-jf4b8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.169097 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk"] Dec 10 15:45:00 crc kubenswrapper[4727]: E1210 15:45:00.169813 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3cde39b-0b7a-4fdc-83f7-a213df953bfb" containerName="extract-utilities" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.169838 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3cde39b-0b7a-4fdc-83f7-a213df953bfb" containerName="extract-utilities" Dec 10 15:45:00 crc kubenswrapper[4727]: E1210 15:45:00.169853 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3cde39b-0b7a-4fdc-83f7-a213df953bfb" containerName="registry-server" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.169862 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3cde39b-0b7a-4fdc-83f7-a213df953bfb" containerName="registry-server" Dec 10 15:45:00 crc kubenswrapper[4727]: E1210 15:45:00.169899 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3cde39b-0b7a-4fdc-83f7-a213df953bfb" containerName="extract-content" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.169925 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3cde39b-0b7a-4fdc-83f7-a213df953bfb" containerName="extract-content" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.170199 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3cde39b-0b7a-4fdc-83f7-a213df953bfb" containerName="registry-server" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.171510 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.174573 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.174720 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.181784 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk"] Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.223121 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.223171 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf4b8\" (UniqueName: \"kubernetes.io/projected/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-kube-api-access-jf4b8\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.276517 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3cde39b-0b7a-4fdc-83f7-a213df953bfb" (UID: "d3cde39b-0b7a-4fdc-83f7-a213df953bfb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.325721 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l9cb\" (UniqueName: \"kubernetes.io/projected/93d8ad52-358e-4836-bcb1-acfc9685104c-kube-api-access-5l9cb\") pod \"collect-profiles-29423025-d8jvk\" (UID: \"93d8ad52-358e-4836-bcb1-acfc9685104c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.325891 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93d8ad52-358e-4836-bcb1-acfc9685104c-config-volume\") pod \"collect-profiles-29423025-d8jvk\" (UID: \"93d8ad52-358e-4836-bcb1-acfc9685104c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.326002 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93d8ad52-358e-4836-bcb1-acfc9685104c-secret-volume\") pod \"collect-profiles-29423025-d8jvk\" (UID: \"93d8ad52-358e-4836-bcb1-acfc9685104c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.326134 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3cde39b-0b7a-4fdc-83f7-a213df953bfb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.335721 4727 generic.go:334] "Generic (PLEG): container finished" podID="d3cde39b-0b7a-4fdc-83f7-a213df953bfb" containerID="cd352b52e93569947f17893fe8e5cf5b01ce08504007e733f409b1b332137bae" exitCode=0 Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.335774 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-72z9s" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.335797 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72z9s" event={"ID":"d3cde39b-0b7a-4fdc-83f7-a213df953bfb","Type":"ContainerDied","Data":"cd352b52e93569947f17893fe8e5cf5b01ce08504007e733f409b1b332137bae"} Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.335840 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72z9s" event={"ID":"d3cde39b-0b7a-4fdc-83f7-a213df953bfb","Type":"ContainerDied","Data":"37cffbe9788ccff4731be38c02531e63caeef611b5a51b0f7a32aad33a84ddb9"} Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.335864 4727 scope.go:117] "RemoveContainer" containerID="cd352b52e93569947f17893fe8e5cf5b01ce08504007e733f409b1b332137bae" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.382863 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn"] Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.390589 4727 scope.go:117] "RemoveContainer" containerID="905bbd6eb1418afdb4b260ada312213d805521cddebecdd9f8585b937f574322" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.429621 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l9cb\" (UniqueName: \"kubernetes.io/projected/93d8ad52-358e-4836-bcb1-acfc9685104c-kube-api-access-5l9cb\") pod \"collect-profiles-29423025-d8jvk\" (UID: \"93d8ad52-358e-4836-bcb1-acfc9685104c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.429970 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93d8ad52-358e-4836-bcb1-acfc9685104c-config-volume\") pod \"collect-profiles-29423025-d8jvk\" (UID: \"93d8ad52-358e-4836-bcb1-acfc9685104c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.430011 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93d8ad52-358e-4836-bcb1-acfc9685104c-secret-volume\") pod \"collect-profiles-29423025-d8jvk\" (UID: \"93d8ad52-358e-4836-bcb1-acfc9685104c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.431837 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93d8ad52-358e-4836-bcb1-acfc9685104c-config-volume\") pod \"collect-profiles-29423025-d8jvk\" (UID: \"93d8ad52-358e-4836-bcb1-acfc9685104c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.434331 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93d8ad52-358e-4836-bcb1-acfc9685104c-secret-volume\") pod \"collect-profiles-29423025-d8jvk\" (UID: \"93d8ad52-358e-4836-bcb1-acfc9685104c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.449080 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-72z9s"] Dec 10 15:45:00 crc 
kubenswrapper[4727]: I1210 15:45:00.453924 4727 scope.go:117] "RemoveContainer" containerID="f37003819559607ba15555364342966729ad9bbccfeec3d347408093aa61f1fd" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.455426 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l9cb\" (UniqueName: \"kubernetes.io/projected/93d8ad52-358e-4836-bcb1-acfc9685104c-kube-api-access-5l9cb\") pod \"collect-profiles-29423025-d8jvk\" (UID: \"93d8ad52-358e-4836-bcb1-acfc9685104c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.463163 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-72z9s"] Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.491427 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.591769 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3cde39b-0b7a-4fdc-83f7-a213df953bfb" path="/var/lib/kubelet/pods/d3cde39b-0b7a-4fdc-83f7-a213df953bfb/volumes" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.644217 4727 scope.go:117] "RemoveContainer" containerID="cd352b52e93569947f17893fe8e5cf5b01ce08504007e733f409b1b332137bae" Dec 10 15:45:00 crc kubenswrapper[4727]: E1210 15:45:00.646440 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd352b52e93569947f17893fe8e5cf5b01ce08504007e733f409b1b332137bae\": container with ID starting with cd352b52e93569947f17893fe8e5cf5b01ce08504007e733f409b1b332137bae not found: ID does not exist" containerID="cd352b52e93569947f17893fe8e5cf5b01ce08504007e733f409b1b332137bae" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.646498 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd352b52e93569947f17893fe8e5cf5b01ce08504007e733f409b1b332137bae"} err="failed to get container status \"cd352b52e93569947f17893fe8e5cf5b01ce08504007e733f409b1b332137bae\": rpc error: code = NotFound desc = could not find container \"cd352b52e93569947f17893fe8e5cf5b01ce08504007e733f409b1b332137bae\": container with ID starting with cd352b52e93569947f17893fe8e5cf5b01ce08504007e733f409b1b332137bae not found: ID does not exist" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.646532 4727 scope.go:117] "RemoveContainer" containerID="905bbd6eb1418afdb4b260ada312213d805521cddebecdd9f8585b937f574322" Dec 10 15:45:00 crc kubenswrapper[4727]: E1210 15:45:00.652244 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"905bbd6eb1418afdb4b260ada312213d805521cddebecdd9f8585b937f574322\": container with ID starting with 905bbd6eb1418afdb4b260ada312213d805521cddebecdd9f8585b937f574322 not found: ID does not exist" containerID="905bbd6eb1418afdb4b260ada312213d805521cddebecdd9f8585b937f574322" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.652306 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"905bbd6eb1418afdb4b260ada312213d805521cddebecdd9f8585b937f574322"} err="failed to get container status \"905bbd6eb1418afdb4b260ada312213d805521cddebecdd9f8585b937f574322\": rpc error: code = NotFound desc = could not find container 
\"905bbd6eb1418afdb4b260ada312213d805521cddebecdd9f8585b937f574322\": container with ID starting with 905bbd6eb1418afdb4b260ada312213d805521cddebecdd9f8585b937f574322 not found: ID does not exist" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.652341 4727 scope.go:117] "RemoveContainer" containerID="f37003819559607ba15555364342966729ad9bbccfeec3d347408093aa61f1fd" Dec 10 15:45:00 crc kubenswrapper[4727]: E1210 15:45:00.652959 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f37003819559607ba15555364342966729ad9bbccfeec3d347408093aa61f1fd\": container with ID starting with f37003819559607ba15555364342966729ad9bbccfeec3d347408093aa61f1fd not found: ID does not exist" containerID="f37003819559607ba15555364342966729ad9bbccfeec3d347408093aa61f1fd" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.653015 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37003819559607ba15555364342966729ad9bbccfeec3d347408093aa61f1fd"} err="failed to get container status \"f37003819559607ba15555364342966729ad9bbccfeec3d347408093aa61f1fd\": rpc error: code = NotFound desc = could not find container \"f37003819559607ba15555364342966729ad9bbccfeec3d347408093aa61f1fd\": container with ID starting with f37003819559607ba15555364342966729ad9bbccfeec3d347408093aa61f1fd not found: ID does not exist" Dec 10 15:45:00 crc kubenswrapper[4727]: I1210 15:45:00.980183 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk"] Dec 10 15:45:01 crc kubenswrapper[4727]: I1210 15:45:01.347614 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" event={"ID":"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9","Type":"ContainerStarted","Data":"1ffdf8e55db230f3e3baf8c05e5474c5453fa8b2daab728b3d7abeff4e7c6079"} Dec 10 15:45:01 crc kubenswrapper[4727]: W1210 15:45:01.632070 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93d8ad52_358e_4836_bcb1_acfc9685104c.slice/crio-135bf9245ee2ccb7bab436be1547c4bde2deb6ce6af9a678ef5729eac392d008 WatchSource:0}: Error finding container 135bf9245ee2ccb7bab436be1547c4bde2deb6ce6af9a678ef5729eac392d008: Status 404 returned error can't find the container with id 135bf9245ee2ccb7bab436be1547c4bde2deb6ce6af9a678ef5729eac392d008 Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.107767 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2"] Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.115744 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.118559 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.124597 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2"] Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.267647 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trt6k\" (UniqueName: \"kubernetes.io/projected/af932d9d-d878-4924-a389-19fea975fe84-kube-api-access-trt6k\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2\" (UID: \"af932d9d-d878-4924-a389-19fea975fe84\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.267803 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af932d9d-d878-4924-a389-19fea975fe84-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2\" (UID: \"af932d9d-d878-4924-a389-19fea975fe84\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.267853 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af932d9d-d878-4924-a389-19fea975fe84-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2\" (UID: \"af932d9d-d878-4924-a389-19fea975fe84\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.358943 4727 generic.go:334] "Generic (PLEG): container finished" podID="93d8ad52-358e-4836-bcb1-acfc9685104c" containerID="ca4b3c2a822c922edd455f06a87a88460a194324df81dc9f16250e5083388333" exitCode=0 Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.359040 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" event={"ID":"93d8ad52-358e-4836-bcb1-acfc9685104c","Type":"ContainerDied","Data":"ca4b3c2a822c922edd455f06a87a88460a194324df81dc9f16250e5083388333"} Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.359230 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" event={"ID":"93d8ad52-358e-4836-bcb1-acfc9685104c","Type":"ContainerStarted","Data":"135bf9245ee2ccb7bab436be1547c4bde2deb6ce6af9a678ef5729eac392d008"} Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.360487 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" event={"ID":"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9","Type":"ContainerStarted","Data":"2cb1d04611b2892e22970d3b2ee5b14f2f4653a0163201085dbd9431e494602c"} Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.369817 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af932d9d-d878-4924-a389-19fea975fe84-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2\" 
(UID: \"af932d9d-d878-4924-a389-19fea975fe84\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.369873 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af932d9d-d878-4924-a389-19fea975fe84-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2\" (UID: \"af932d9d-d878-4924-a389-19fea975fe84\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.370016 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trt6k\" (UniqueName: \"kubernetes.io/projected/af932d9d-d878-4924-a389-19fea975fe84-kube-api-access-trt6k\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2\" (UID: \"af932d9d-d878-4924-a389-19fea975fe84\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.370349 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af932d9d-d878-4924-a389-19fea975fe84-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2\" (UID: \"af932d9d-d878-4924-a389-19fea975fe84\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.370388 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af932d9d-d878-4924-a389-19fea975fe84-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2\" (UID: \"af932d9d-d878-4924-a389-19fea975fe84\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.391161 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trt6k\" (UniqueName: \"kubernetes.io/projected/af932d9d-d878-4924-a389-19fea975fe84-kube-api-access-trt6k\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2\" (UID: \"af932d9d-d878-4924-a389-19fea975fe84\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.413541 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" podStartSLOduration=2.157243254 podStartE2EDuration="3.413515213s" podCreationTimestamp="2025-12-10 15:44:59 +0000 UTC" firstStartedPulling="2025-12-10 15:45:00.390606493 +0000 UTC m=+4404.585381035" lastFinishedPulling="2025-12-10 15:45:01.646878452 +0000 UTC m=+4405.841652994" observedRunningTime="2025-12-10 15:45:02.403720896 +0000 UTC m=+4406.598495428" watchObservedRunningTime="2025-12-10 15:45:02.413515213 +0000 UTC m=+4406.608289745" Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.443856 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" Dec 10 15:45:02 crc kubenswrapper[4727]: I1210 15:45:02.998487 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2"] Dec 10 15:45:03 crc kubenswrapper[4727]: I1210 15:45:03.371656 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" event={"ID":"af932d9d-d878-4924-a389-19fea975fe84","Type":"ContainerStarted","Data":"8cbed3f839e4934864a5c8094a608b73b85041b6307916dc2c82dd86f46d0c7b"} Dec 10 15:45:03 crc kubenswrapper[4727]: I1210 15:45:03.372024 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" event={"ID":"af932d9d-d878-4924-a389-19fea975fe84","Type":"ContainerStarted","Data":"ba0a7f55b639bf0c9498b467da8e6b66e3ba21221e704101f8f8f3f9ed2c4222"} Dec 10 15:45:03 crc kubenswrapper[4727]: I1210 15:45:03.830382 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" Dec 10 15:45:03 crc kubenswrapper[4727]: I1210 15:45:03.910222 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93d8ad52-358e-4836-bcb1-acfc9685104c-secret-volume\") pod \"93d8ad52-358e-4836-bcb1-acfc9685104c\" (UID: \"93d8ad52-358e-4836-bcb1-acfc9685104c\") " Dec 10 15:45:03 crc kubenswrapper[4727]: I1210 15:45:03.910852 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l9cb\" (UniqueName: \"kubernetes.io/projected/93d8ad52-358e-4836-bcb1-acfc9685104c-kube-api-access-5l9cb\") pod \"93d8ad52-358e-4836-bcb1-acfc9685104c\" (UID: \"93d8ad52-358e-4836-bcb1-acfc9685104c\") " Dec 10 15:45:03 crc kubenswrapper[4727]: I1210 15:45:03.910936 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93d8ad52-358e-4836-bcb1-acfc9685104c-config-volume\") pod \"93d8ad52-358e-4836-bcb1-acfc9685104c\" (UID: \"93d8ad52-358e-4836-bcb1-acfc9685104c\") " Dec 10 15:45:03 crc kubenswrapper[4727]: I1210 15:45:03.912315 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93d8ad52-358e-4836-bcb1-acfc9685104c-config-volume" (OuterVolumeSpecName: "config-volume") pod "93d8ad52-358e-4836-bcb1-acfc9685104c" (UID: "93d8ad52-358e-4836-bcb1-acfc9685104c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:45:03 crc kubenswrapper[4727]: I1210 15:45:03.916658 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d8ad52-358e-4836-bcb1-acfc9685104c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "93d8ad52-358e-4836-bcb1-acfc9685104c" (UID: "93d8ad52-358e-4836-bcb1-acfc9685104c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:03 crc kubenswrapper[4727]: I1210 15:45:03.917152 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d8ad52-358e-4836-bcb1-acfc9685104c-kube-api-access-5l9cb" (OuterVolumeSpecName: "kube-api-access-5l9cb") pod "93d8ad52-358e-4836-bcb1-acfc9685104c" (UID: "93d8ad52-358e-4836-bcb1-acfc9685104c"). InnerVolumeSpecName "kube-api-access-5l9cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:04 crc kubenswrapper[4727]: I1210 15:45:04.014306 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l9cb\" (UniqueName: \"kubernetes.io/projected/93d8ad52-358e-4836-bcb1-acfc9685104c-kube-api-access-5l9cb\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:04 crc kubenswrapper[4727]: I1210 15:45:04.014343 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93d8ad52-358e-4836-bcb1-acfc9685104c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:04 crc kubenswrapper[4727]: I1210 15:45:04.014357 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93d8ad52-358e-4836-bcb1-acfc9685104c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:04 crc kubenswrapper[4727]: I1210 15:45:04.384174 4727 generic.go:334] "Generic (PLEG): container finished" podID="af932d9d-d878-4924-a389-19fea975fe84" containerID="8cbed3f839e4934864a5c8094a608b73b85041b6307916dc2c82dd86f46d0c7b" exitCode=0 Dec 10 15:45:04 crc kubenswrapper[4727]: I1210 15:45:04.384619 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" event={"ID":"af932d9d-d878-4924-a389-19fea975fe84","Type":"ContainerDied","Data":"8cbed3f839e4934864a5c8094a608b73b85041b6307916dc2c82dd86f46d0c7b"} Dec 10 15:45:04 crc kubenswrapper[4727]: I1210 15:45:04.389637 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" event={"ID":"93d8ad52-358e-4836-bcb1-acfc9685104c","Type":"ContainerDied","Data":"135bf9245ee2ccb7bab436be1547c4bde2deb6ce6af9a678ef5729eac392d008"} Dec 10 15:45:04 crc kubenswrapper[4727]: I1210 15:45:04.389679 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="135bf9245ee2ccb7bab436be1547c4bde2deb6ce6af9a678ef5729eac392d008" Dec 10 15:45:04 crc kubenswrapper[4727]: I1210 15:45:04.389902 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-d8jvk" Dec 10 15:45:04 crc kubenswrapper[4727]: I1210 15:45:04.911880 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x"] Dec 10 15:45:04 crc kubenswrapper[4727]: I1210 15:45:04.922486 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422980-qfq9x"] Dec 10 15:45:05 crc kubenswrapper[4727]: E1210 15:45:05.565037 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:45:06 crc kubenswrapper[4727]: I1210 15:45:06.412514 4727 generic.go:334] "Generic (PLEG): container finished" podID="af932d9d-d878-4924-a389-19fea975fe84" containerID="4bc4e7ea701fe3ee4c584d394b818f296d41e121e047e73f395558a484acce55" exitCode=0 Dec 10 15:45:06 crc kubenswrapper[4727]: I1210 15:45:06.412578 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" event={"ID":"af932d9d-d878-4924-a389-19fea975fe84","Type":"ContainerDied","Data":"4bc4e7ea701fe3ee4c584d394b818f296d41e121e047e73f395558a484acce55"} Dec 10 15:45:06 crc kubenswrapper[4727]: I1210 15:45:06.578118 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63ce4724-56ea-4220-ab50-7f83b039cd49" path="/var/lib/kubelet/pods/63ce4724-56ea-4220-ab50-7f83b039cd49/volumes" Dec 10 15:45:07 crc kubenswrapper[4727]: I1210 15:45:07.428615 4727 generic.go:334] "Generic (PLEG): container finished" podID="af932d9d-d878-4924-a389-19fea975fe84" containerID="2aafa635a76564bcc76601c1581cac1f8f5d8f705b578997e4d843dc60baa96d" exitCode=0 Dec 10 15:45:07 crc kubenswrapper[4727]: I1210 15:45:07.428792 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" event={"ID":"af932d9d-d878-4924-a389-19fea975fe84","Type":"ContainerDied","Data":"2aafa635a76564bcc76601c1581cac1f8f5d8f705b578997e4d843dc60baa96d"} Dec 10 15:45:07 crc kubenswrapper[4727]: E1210 15:45:07.564655 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:45:07 crc kubenswrapper[4727]: I1210 15:45:07.723937 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:45:07 crc kubenswrapper[4727]: I1210 15:45:07.724024 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:45:07 crc 
kubenswrapper[4727]: I1210 15:45:07.724084 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v"
Dec 10 15:45:07 crc kubenswrapper[4727]: I1210 15:45:07.725602 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19871bb1326002a5b787fe9cdab6775b032466dfefd63b7f0ba220e2030b241f"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 10 15:45:07 crc kubenswrapper[4727]: I1210 15:45:07.725794 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://19871bb1326002a5b787fe9cdab6775b032466dfefd63b7f0ba220e2030b241f" gracePeriod=600
Dec 10 15:45:08 crc kubenswrapper[4727]: I1210 15:45:08.442136 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="19871bb1326002a5b787fe9cdab6775b032466dfefd63b7f0ba220e2030b241f" exitCode=0
Dec 10 15:45:08 crc kubenswrapper[4727]: I1210 15:45:08.442226 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"19871bb1326002a5b787fe9cdab6775b032466dfefd63b7f0ba220e2030b241f"}
Dec 10 15:45:08 crc kubenswrapper[4727]: I1210 15:45:08.442515 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022"}
Dec 10 15:45:08 crc kubenswrapper[4727]: I1210 15:45:08.442547 4727 scope.go:117] "RemoveContainer" containerID="1a5a8e59c915395bf562390e6e3f450b919a58f61eb8f2184d19a5163239619b"
Dec 10 15:45:09 crc kubenswrapper[4727]: I1210 15:45:09.008717 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2"
Dec 10 15:45:09 crc kubenswrapper[4727]: I1210 15:45:09.138696 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trt6k\" (UniqueName: \"kubernetes.io/projected/af932d9d-d878-4924-a389-19fea975fe84-kube-api-access-trt6k\") pod \"af932d9d-d878-4924-a389-19fea975fe84\" (UID: \"af932d9d-d878-4924-a389-19fea975fe84\") "
Dec 10 15:45:09 crc kubenswrapper[4727]: I1210 15:45:09.138788 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af932d9d-d878-4924-a389-19fea975fe84-bundle\") pod \"af932d9d-d878-4924-a389-19fea975fe84\" (UID: \"af932d9d-d878-4924-a389-19fea975fe84\") "
Dec 10 15:45:09 crc kubenswrapper[4727]: I1210 15:45:09.138832 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af932d9d-d878-4924-a389-19fea975fe84-util\") pod \"af932d9d-d878-4924-a389-19fea975fe84\" (UID: \"af932d9d-d878-4924-a389-19fea975fe84\") "
Dec 10 15:45:09 crc kubenswrapper[4727]: I1210 15:45:09.139771 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af932d9d-d878-4924-a389-19fea975fe84-bundle" (OuterVolumeSpecName: "bundle") pod "af932d9d-d878-4924-a389-19fea975fe84" (UID: "af932d9d-d878-4924-a389-19fea975fe84"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:45:09 crc kubenswrapper[4727]: I1210 15:45:09.156423 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af932d9d-d878-4924-a389-19fea975fe84-kube-api-access-trt6k" (OuterVolumeSpecName: "kube-api-access-trt6k") pod "af932d9d-d878-4924-a389-19fea975fe84" (UID: "af932d9d-d878-4924-a389-19fea975fe84"). InnerVolumeSpecName "kube-api-access-trt6k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:45:09 crc kubenswrapper[4727]: I1210 15:45:09.241930 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trt6k\" (UniqueName: \"kubernetes.io/projected/af932d9d-d878-4924-a389-19fea975fe84-kube-api-access-trt6k\") on node \"crc\" DevicePath \"\""
Dec 10 15:45:09 crc kubenswrapper[4727]: I1210 15:45:09.241963 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af932d9d-d878-4924-a389-19fea975fe84-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 15:45:09 crc kubenswrapper[4727]: I1210 15:45:09.373305 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af932d9d-d878-4924-a389-19fea975fe84-util" (OuterVolumeSpecName: "util") pod "af932d9d-d878-4924-a389-19fea975fe84" (UID: "af932d9d-d878-4924-a389-19fea975fe84"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:45:09 crc kubenswrapper[4727]: I1210 15:45:09.446106 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af932d9d-d878-4924-a389-19fea975fe84-util\") on node \"crc\" DevicePath \"\""
Dec 10 15:45:09 crc kubenswrapper[4727]: I1210 15:45:09.457006 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2" event={"ID":"af932d9d-d878-4924-a389-19fea975fe84","Type":"ContainerDied","Data":"ba0a7f55b639bf0c9498b467da8e6b66e3ba21221e704101f8f8f3f9ed2c4222"}
Dec 10 15:45:09 crc kubenswrapper[4727]: I1210 15:45:09.457052 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba0a7f55b639bf0c9498b467da8e6b66e3ba21221e704101f8f8f3f9ed2c4222"
Dec 10 15:45:09 crc kubenswrapper[4727]: I1210 15:45:09.457085 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2"
Dec 10 15:45:17 crc kubenswrapper[4727]: E1210 15:45:17.573342 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.640492 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"]
Dec 10 15:45:17 crc kubenswrapper[4727]: E1210 15:45:17.641006 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af932d9d-d878-4924-a389-19fea975fe84" containerName="extract"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.641028 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="af932d9d-d878-4924-a389-19fea975fe84" containerName="extract"
Dec 10 15:45:17 crc kubenswrapper[4727]: E1210 15:45:17.641065 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d8ad52-358e-4836-bcb1-acfc9685104c" containerName="collect-profiles"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.641073 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d8ad52-358e-4836-bcb1-acfc9685104c" containerName="collect-profiles"
Dec 10 15:45:17 crc kubenswrapper[4727]: E1210 15:45:17.641088 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af932d9d-d878-4924-a389-19fea975fe84" containerName="util"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.641096 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="af932d9d-d878-4924-a389-19fea975fe84" containerName="util"
Dec 10 15:45:17 crc kubenswrapper[4727]: E1210 15:45:17.641112 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af932d9d-d878-4924-a389-19fea975fe84" containerName="pull"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.641119 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="af932d9d-d878-4924-a389-19fea975fe84" containerName="pull"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.641323 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d8ad52-358e-4836-bcb1-acfc9685104c" containerName="collect-profiles"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.641338 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="af932d9d-d878-4924-a389-19fea975fe84" containerName="extract"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.644368 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.669807 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"]
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.739825 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/620908f6-27df-4ab9-a272-5a3bf56b2e81-manager-config\") pod \"loki-operator-controller-manager-767689bfb5-7cvkt\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.739897 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-apiservice-cert\") pod \"loki-operator-controller-manager-767689bfb5-7cvkt\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.740017 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-webhook-cert\") pod \"loki-operator-controller-manager-767689bfb5-7cvkt\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.740048 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-767689bfb5-7cvkt\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.740111 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m62lr\" (UniqueName: \"kubernetes.io/projected/620908f6-27df-4ab9-a272-5a3bf56b2e81-kube-api-access-m62lr\") pod \"loki-operator-controller-manager-767689bfb5-7cvkt\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.841999 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/620908f6-27df-4ab9-a272-5a3bf56b2e81-manager-config\") pod \"loki-operator-controller-manager-767689bfb5-7cvkt\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.842078 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-apiservice-cert\") pod \"loki-operator-controller-manager-767689bfb5-7cvkt\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.842176 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-webhook-cert\") pod \"loki-operator-controller-manager-767689bfb5-7cvkt\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.842209 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-767689bfb5-7cvkt\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.842262 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m62lr\" (UniqueName: \"kubernetes.io/projected/620908f6-27df-4ab9-a272-5a3bf56b2e81-kube-api-access-m62lr\") pod \"loki-operator-controller-manager-767689bfb5-7cvkt\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.843756 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/620908f6-27df-4ab9-a272-5a3bf56b2e81-manager-config\") pod \"loki-operator-controller-manager-767689bfb5-7cvkt\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.850964 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-767689bfb5-7cvkt\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.851595 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-webhook-cert\") pod \"loki-operator-controller-manager-767689bfb5-7cvkt\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.852302 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-apiservice-cert\") pod \"loki-operator-controller-manager-767689bfb5-7cvkt\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.872001 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m62lr\" (UniqueName: \"kubernetes.io/projected/620908f6-27df-4ab9-a272-5a3bf56b2e81-kube-api-access-m62lr\") pod \"loki-operator-controller-manager-767689bfb5-7cvkt\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:17 crc kubenswrapper[4727]: I1210 15:45:17.967963 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:18 crc kubenswrapper[4727]: I1210 15:45:18.554024 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"]
Dec 10 15:45:18 crc kubenswrapper[4727]: W1210 15:45:18.554517 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod620908f6_27df_4ab9_a272_5a3bf56b2e81.slice/crio-c9bd00b524fbd26d711db087bc29ee105c3ef71ad064ca96276fffaed8cf5d22 WatchSource:0}: Error finding container c9bd00b524fbd26d711db087bc29ee105c3ef71ad064ca96276fffaed8cf5d22: Status 404 returned error can't find the container with id c9bd00b524fbd26d711db087bc29ee105c3ef71ad064ca96276fffaed8cf5d22
Dec 10 15:45:18 crc kubenswrapper[4727]: E1210 15:45:18.570335 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:45:19 crc kubenswrapper[4727]: I1210 15:45:19.554864 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt" event={"ID":"620908f6-27df-4ab9-a272-5a3bf56b2e81","Type":"ContainerStarted","Data":"c9bd00b524fbd26d711db087bc29ee105c3ef71ad064ca96276fffaed8cf5d22"}
Dec 10 15:45:23 crc kubenswrapper[4727]: I1210 15:45:23.333737 4727 scope.go:117] "RemoveContainer" containerID="e87c35bd13e754165157b99bec5fb308dcd1345813a728d6bc67b5471d1ee80a"
Dec 10 15:45:28 crc kubenswrapper[4727]: I1210 15:45:28.658466 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt" event={"ID":"620908f6-27df-4ab9-a272-5a3bf56b2e81","Type":"ContainerStarted","Data":"72e10766490a77b74e4c88810cf29a6ede0f5aa7806c7b57950c7dd4650b544f"}
Dec 10 15:45:28 crc kubenswrapper[4727]: I1210 15:45:28.658960 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt" event={"ID":"620908f6-27df-4ab9-a272-5a3bf56b2e81","Type":"ContainerStarted","Data":"889b6b0ee3858397d2ecb8d7c16ef0a44f29c29044a7a1147d6af3114da80261"}
Dec 10 15:45:28 crc kubenswrapper[4727]: I1210 15:45:28.659004 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:28 crc kubenswrapper[4727]: I1210 15:45:28.691160 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt" podStartSLOduration=2.707958992 podStartE2EDuration="11.691136295s" podCreationTimestamp="2025-12-10 15:45:17 +0000 UTC" firstStartedPulling="2025-12-10 15:45:18.557890653 +0000 UTC m=+4422.752665185" lastFinishedPulling="2025-12-10 15:45:27.541067946 +0000 UTC m=+4431.735842488" observedRunningTime="2025-12-10 15:45:28.677137552 +0000 UTC m=+4432.871912104" watchObservedRunningTime="2025-12-10 15:45:28.691136295 +0000 UTC m=+4432.885910847"
Dec 10 15:45:30 crc kubenswrapper[4727]: E1210 15:45:30.566381 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:45:33 crc kubenswrapper[4727]: E1210 15:45:33.565833 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:45:37 crc kubenswrapper[4727]: I1210 15:45:37.971032 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"
Dec 10 15:45:38 crc kubenswrapper[4727]: I1210 15:45:38.048172 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp"]
Dec 10 15:45:38 crc kubenswrapper[4727]: I1210 15:45:38.048712 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" podUID="faf1ebd4-74be-4049-b5cb-26e049d50e6a" containerName="manager" containerID="cri-o://fcd551065edc865d7cff5d62a80941242ff2d7221823c93105bc5f97b1590008" gracePeriod=10
Dec 10 15:45:38 crc kubenswrapper[4727]: I1210 15:45:38.048805 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" podUID="faf1ebd4-74be-4049-b5cb-26e049d50e6a" containerName="kube-rbac-proxy" containerID="cri-o://a43465958679d0bd79aa6777f6d062c864e4b39d7869ed450c2cb6e8410d46fe" gracePeriod=10
Dec 10 15:45:38 crc kubenswrapper[4727]: I1210 15:45:38.768501 4727 generic.go:334] "Generic (PLEG): container finished" podID="faf1ebd4-74be-4049-b5cb-26e049d50e6a" containerID="a43465958679d0bd79aa6777f6d062c864e4b39d7869ed450c2cb6e8410d46fe" exitCode=0
Dec 10 15:45:38 crc kubenswrapper[4727]: I1210 15:45:38.768846 4727 generic.go:334] "Generic (PLEG): container finished" podID="faf1ebd4-74be-4049-b5cb-26e049d50e6a" containerID="fcd551065edc865d7cff5d62a80941242ff2d7221823c93105bc5f97b1590008" exitCode=0
Dec 10 15:45:38 crc kubenswrapper[4727]: I1210 15:45:38.768675 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" event={"ID":"faf1ebd4-74be-4049-b5cb-26e049d50e6a","Type":"ContainerDied","Data":"a43465958679d0bd79aa6777f6d062c864e4b39d7869ed450c2cb6e8410d46fe"}
Dec 10 15:45:38 crc kubenswrapper[4727]: I1210 15:45:38.768893 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" event={"ID":"faf1ebd4-74be-4049-b5cb-26e049d50e6a","Type":"ContainerDied","Data":"fcd551065edc865d7cff5d62a80941242ff2d7221823c93105bc5f97b1590008"}
Dec 10 15:45:38 crc kubenswrapper[4727]: I1210 15:45:38.949740 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp"
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.114944 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-webhook-cert\") pod \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") "
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.115053 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-loki-operator-metrics-cert\") pod \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") "
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.115143 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/faf1ebd4-74be-4049-b5cb-26e049d50e6a-manager-config\") pod \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") "
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.115208 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb7x9\" (UniqueName: \"kubernetes.io/projected/faf1ebd4-74be-4049-b5cb-26e049d50e6a-kube-api-access-kb7x9\") pod \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") "
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.115264 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-apiservice-cert\") pod \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\" (UID: \"faf1ebd4-74be-4049-b5cb-26e049d50e6a\") "
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.121976 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-loki-operator-metrics-cert" (OuterVolumeSpecName: "loki-operator-metrics-cert") pod "faf1ebd4-74be-4049-b5cb-26e049d50e6a" (UID: "faf1ebd4-74be-4049-b5cb-26e049d50e6a"). InnerVolumeSpecName "loki-operator-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.126207 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "faf1ebd4-74be-4049-b5cb-26e049d50e6a" (UID: "faf1ebd4-74be-4049-b5cb-26e049d50e6a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.126841 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf1ebd4-74be-4049-b5cb-26e049d50e6a-kube-api-access-kb7x9" (OuterVolumeSpecName: "kube-api-access-kb7x9") pod "faf1ebd4-74be-4049-b5cb-26e049d50e6a" (UID: "faf1ebd4-74be-4049-b5cb-26e049d50e6a"). InnerVolumeSpecName "kube-api-access-kb7x9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.140046 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "faf1ebd4-74be-4049-b5cb-26e049d50e6a" (UID: "faf1ebd4-74be-4049-b5cb-26e049d50e6a"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.213496 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faf1ebd4-74be-4049-b5cb-26e049d50e6a-manager-config" (OuterVolumeSpecName: "manager-config") pod "faf1ebd4-74be-4049-b5cb-26e049d50e6a" (UID: "faf1ebd4-74be-4049-b5cb-26e049d50e6a"). InnerVolumeSpecName "manager-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.218152 4727 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-webhook-cert\") on node \"crc\" DevicePath \"\""
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.218181 4727 reconciler_common.go:293] "Volume detached for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-loki-operator-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.218193 4727 reconciler_common.go:293] "Volume detached for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/faf1ebd4-74be-4049-b5cb-26e049d50e6a-manager-config\") on node \"crc\" DevicePath \"\""
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.218202 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb7x9\" (UniqueName: \"kubernetes.io/projected/faf1ebd4-74be-4049-b5cb-26e049d50e6a-kube-api-access-kb7x9\") on node \"crc\" DevicePath \"\""
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.218210 4727 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/faf1ebd4-74be-4049-b5cb-26e049d50e6a-apiservice-cert\") on node \"crc\" DevicePath \"\""
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.783792 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp" event={"ID":"faf1ebd4-74be-4049-b5cb-26e049d50e6a","Type":"ContainerDied","Data":"356950bf26080e66b66617621e641e897f162a5c8b9abe4e3d0003aa8bbd375f"}
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.784160 4727 scope.go:117] "RemoveContainer" containerID="a43465958679d0bd79aa6777f6d062c864e4b39d7869ed450c2cb6e8410d46fe"
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.783865 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp"
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.816330 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp"]
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.823122 4727 scope.go:117] "RemoveContainer" containerID="fcd551065edc865d7cff5d62a80941242ff2d7221823c93105bc5f97b1590008"
Dec 10 15:45:39 crc kubenswrapper[4727]: I1210 15:45:39.826305 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-654644598b-sl6jp"]
Dec 10 15:45:40 crc kubenswrapper[4727]: I1210 15:45:40.577474 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf1ebd4-74be-4049-b5cb-26e049d50e6a" path="/var/lib/kubelet/pods/faf1ebd4-74be-4049-b5cb-26e049d50e6a/volumes"
Dec 10 15:45:42 crc kubenswrapper[4727]: E1210 15:45:42.567123 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.190674 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"]
Dec 10 15:45:44 crc kubenswrapper[4727]: E1210 15:45:44.191461 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf1ebd4-74be-4049-b5cb-26e049d50e6a" containerName="kube-rbac-proxy"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.191474 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf1ebd4-74be-4049-b5cb-26e049d50e6a" containerName="kube-rbac-proxy"
Dec 10 15:45:44 crc kubenswrapper[4727]: E1210 15:45:44.191531 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf1ebd4-74be-4049-b5cb-26e049d50e6a" containerName="manager"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.191537 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf1ebd4-74be-4049-b5cb-26e049d50e6a" containerName="manager"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.191734 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf1ebd4-74be-4049-b5cb-26e049d50e6a" containerName="kube-rbac-proxy"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.191762 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf1ebd4-74be-4049-b5cb-26e049d50e6a" containerName="manager"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.193143 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.213414 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"]
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.303706 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df5cd708-bc5a-4188-84d4-10f25154053d-webhook-cert\") pod \"loki-operator-controller-manager-77d49cfc99-tqvhd\" (UID: \"df5cd708-bc5a-4188-84d4-10f25154053d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.303771 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df5cd708-bc5a-4188-84d4-10f25154053d-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-77d49cfc99-tqvhd\" (UID: \"df5cd708-bc5a-4188-84d4-10f25154053d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.303828 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/df5cd708-bc5a-4188-84d4-10f25154053d-manager-config\") pod \"loki-operator-controller-manager-77d49cfc99-tqvhd\" (UID: \"df5cd708-bc5a-4188-84d4-10f25154053d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.304021 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df5cd708-bc5a-4188-84d4-10f25154053d-apiservice-cert\") pod \"loki-operator-controller-manager-77d49cfc99-tqvhd\" (UID: \"df5cd708-bc5a-4188-84d4-10f25154053d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.304049 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jskp7\" (UniqueName: \"kubernetes.io/projected/df5cd708-bc5a-4188-84d4-10f25154053d-kube-api-access-jskp7\") pod \"loki-operator-controller-manager-77d49cfc99-tqvhd\" (UID: \"df5cd708-bc5a-4188-84d4-10f25154053d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.406807 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df5cd708-bc5a-4188-84d4-10f25154053d-apiservice-cert\") pod \"loki-operator-controller-manager-77d49cfc99-tqvhd\" (UID: \"df5cd708-bc5a-4188-84d4-10f25154053d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.406881 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jskp7\" (UniqueName: \"kubernetes.io/projected/df5cd708-bc5a-4188-84d4-10f25154053d-kube-api-access-jskp7\") pod \"loki-operator-controller-manager-77d49cfc99-tqvhd\" (UID: \"df5cd708-bc5a-4188-84d4-10f25154053d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.407020 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df5cd708-bc5a-4188-84d4-10f25154053d-webhook-cert\") pod \"loki-operator-controller-manager-77d49cfc99-tqvhd\" (UID: \"df5cd708-bc5a-4188-84d4-10f25154053d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.407061 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df5cd708-bc5a-4188-84d4-10f25154053d-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-77d49cfc99-tqvhd\" (UID: \"df5cd708-bc5a-4188-84d4-10f25154053d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.407119 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/df5cd708-bc5a-4188-84d4-10f25154053d-manager-config\") pod \"loki-operator-controller-manager-77d49cfc99-tqvhd\" (UID: \"df5cd708-bc5a-4188-84d4-10f25154053d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.408236 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/df5cd708-bc5a-4188-84d4-10f25154053d-manager-config\") pod \"loki-operator-controller-manager-77d49cfc99-tqvhd\" (UID: \"df5cd708-bc5a-4188-84d4-10f25154053d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.413519 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df5cd708-bc5a-4188-84d4-10f25154053d-apiservice-cert\") pod \"loki-operator-controller-manager-77d49cfc99-tqvhd\" (UID: \"df5cd708-bc5a-4188-84d4-10f25154053d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.415223 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df5cd708-bc5a-4188-84d4-10f25154053d-webhook-cert\") pod \"loki-operator-controller-manager-77d49cfc99-tqvhd\" (UID: \"df5cd708-bc5a-4188-84d4-10f25154053d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.419594 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df5cd708-bc5a-4188-84d4-10f25154053d-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-77d49cfc99-tqvhd\" (UID: \"df5cd708-bc5a-4188-84d4-10f25154053d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.435594 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jskp7\" (UniqueName: \"kubernetes.io/projected/df5cd708-bc5a-4188-84d4-10f25154053d-kube-api-access-jskp7\") pod \"loki-operator-controller-manager-77d49cfc99-tqvhd\" (UID: \"df5cd708-bc5a-4188-84d4-10f25154053d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:44 crc kubenswrapper[4727]: I1210 15:45:44.516872 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:45 crc kubenswrapper[4727]: I1210 15:45:45.018893 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"]
Dec 10 15:45:45 crc kubenswrapper[4727]: E1210 15:45:45.568190 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:45:45 crc kubenswrapper[4727]: I1210 15:45:45.850717 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd" event={"ID":"df5cd708-bc5a-4188-84d4-10f25154053d","Type":"ContainerStarted","Data":"37675c588cfc95de7113a66dc62d8a18b7e91eb8e78a2f25e4da8ee9c95ca5af"}
Dec 10 15:45:45 crc kubenswrapper[4727]: I1210 15:45:45.851066 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd" event={"ID":"df5cd708-bc5a-4188-84d4-10f25154053d","Type":"ContainerStarted","Data":"7c31989dc2e87fbdbbb75099e024f55b4aeaa091b41b341dbd351e41eeef56cb"}
Dec 10 15:45:45 crc kubenswrapper[4727]: I1210 15:45:45.851083 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:45 crc kubenswrapper[4727]: I1210 15:45:45.851093 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd" event={"ID":"df5cd708-bc5a-4188-84d4-10f25154053d","Type":"ContainerStarted","Data":"2f8a82810f4799538ee2ae1d8a41c540c288a8c609b97f4323c01d204cddde39"}
Dec 10 15:45:45 crc kubenswrapper[4727]: I1210 15:45:45.874529 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd" podStartSLOduration=1.87451109 podStartE2EDuration="1.87451109s" podCreationTimestamp="2025-12-10 15:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:45:45.871766381 +0000 UTC m=+4450.066540933" watchObservedRunningTime="2025-12-10 15:45:45.87451109 +0000 UTC m=+4450.069285632"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.453606 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"]
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.456277 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.479945 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"]
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.519450 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-77d49cfc99-tqvhd"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.590603 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2a7f948-3565-42cf-81ed-c16f1b770019-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-q5hvs\" (UID: \"b2a7f948-3565-42cf-81ed-c16f1b770019\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.590802 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2a7f948-3565-42cf-81ed-c16f1b770019-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-q5hvs\" (UID: \"b2a7f948-3565-42cf-81ed-c16f1b770019\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.591043 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfmmk\" (UniqueName: \"kubernetes.io/projected/b2a7f948-3565-42cf-81ed-c16f1b770019-kube-api-access-zfmmk\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-q5hvs\" (UID: \"b2a7f948-3565-42cf-81ed-c16f1b770019\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.591175 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/b2a7f948-3565-42cf-81ed-c16f1b770019-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-q5hvs\" (UID: \"b2a7f948-3565-42cf-81ed-c16f1b770019\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.592426 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/b2a7f948-3565-42cf-81ed-c16f1b770019-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-q5hvs\" (UID: \"b2a7f948-3565-42cf-81ed-c16f1b770019\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.599175 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"]
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.599745 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="f429b94a-d632-41b7-85c3-584f6dfe4475" containerName="loki-ingester" containerID="cri-o://94bbc5178158254c69445da8de8b9cb648167455f1604df38821cd96546171bc" gracePeriod=30
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.635281 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"]
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.635605 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt" podUID="620908f6-27df-4ab9-a272-5a3bf56b2e81" containerName="manager" containerID="cri-o://889b6b0ee3858397d2ecb8d7c16ef0a44f29c29044a7a1147d6af3114da80261" gracePeriod=10
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.636072 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt" podUID="620908f6-27df-4ab9-a272-5a3bf56b2e81" containerName="kube-rbac-proxy" containerID="cri-o://72e10766490a77b74e4c88810cf29a6ede0f5aa7806c7b57950c7dd4650b544f" gracePeriod=10
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.667809 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"]
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.669593 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.692524 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"]
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.692862 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="cc5506b0-8e70-49c6-8783-5802aca6f72e" containerName="loki-compactor" containerID="cri-o://6c32b0d6e5e6077b749e9c0363ea9cf18f2d35e78f9a420832ce9108a8f6a5f2" gracePeriod=30
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.695796 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f171f74-b9e1-42a3-9907-4d34dabee6c2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.708122 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/0f171f74-b9e1-42a3-9907-4d34dabee6c2-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.708231 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/0f171f74-b9e1-42a3-9907-4d34dabee6c2-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.708265 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2a7f948-3565-42cf-81ed-c16f1b770019-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-q5hvs\" (UID: \"b2a7f948-3565-42cf-81ed-c16f1b770019\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.708311 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2a7f948-3565-42cf-81ed-c16f1b770019-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-q5hvs\" (UID: \"b2a7f948-3565-42cf-81ed-c16f1b770019\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.708418 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfmmk\" (UniqueName: \"kubernetes.io/projected/b2a7f948-3565-42cf-81ed-c16f1b770019-kube-api-access-zfmmk\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-q5hvs\" (UID: \"b2a7f948-3565-42cf-81ed-c16f1b770019\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.708448 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/b2a7f948-3565-42cf-81ed-c16f1b770019-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-q5hvs\" (UID: \"b2a7f948-3565-42cf-81ed-c16f1b770019\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.708484 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/0f171f74-b9e1-42a3-9907-4d34dabee6c2-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.708507 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/b2a7f948-3565-42cf-81ed-c16f1b770019-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-q5hvs\" (UID: \"b2a7f948-3565-42cf-81ed-c16f1b770019\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.708525 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh2mr\" (UniqueName: \"kubernetes.io/projected/0f171f74-b9e1-42a3-9907-4d34dabee6c2-kube-api-access-lh2mr\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.708572 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f171f74-b9e1-42a3-9907-4d34dabee6c2-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.709808 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2a7f948-3565-42cf-81ed-c16f1b770019-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-q5hvs\" (UID: \"b2a7f948-3565-42cf-81ed-c16f1b770019\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.710382 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2a7f948-3565-42cf-81ed-c16f1b770019-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-q5hvs\" (UID: \"b2a7f948-3565-42cf-81ed-c16f1b770019\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.724495 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/b2a7f948-3565-42cf-81ed-c16f1b770019-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-q5hvs\" (UID: \"b2a7f948-3565-42cf-81ed-c16f1b770019\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.724561 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"]
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.729336 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/b2a7f948-3565-42cf-81ed-c16f1b770019-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-q5hvs\" (UID: \"b2a7f948-3565-42cf-81ed-c16f1b770019\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.737991 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfmmk\" (UniqueName: \"kubernetes.io/projected/b2a7f948-3565-42cf-81ed-c16f1b770019-kube-api-access-zfmmk\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-q5hvs\" (UID: \"b2a7f948-3565-42cf-81ed-c16f1b770019\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.751073 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"]
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.754279 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.792840 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.807971 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"]
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.810289 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f171f74-b9e1-42a3-9907-4d34dabee6c2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.810349 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/0f171f74-b9e1-42a3-9907-4d34dabee6c2-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.810378 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/680c351c-7945-4752-83ce-8cdd827cf4e5-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-z64ld\" (UID: \"680c351c-7945-4752-83ce-8cdd827cf4e5\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.810398 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-642vv\" (UniqueName: \"kubernetes.io/projected/680c351c-7945-4752-83ce-8cdd827cf4e5-kube-api-access-642vv\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-z64ld\" (UID: \"680c351c-7945-4752-83ce-8cdd827cf4e5\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.810432 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/680c351c-7945-4752-83ce-8cdd827cf4e5-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-z64ld\" (UID: \"680c351c-7945-4752-83ce-8cdd827cf4e5\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.810457 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/680c351c-7945-4752-83ce-8cdd827cf4e5-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-z64ld\" (UID: \"680c351c-7945-4752-83ce-8cdd827cf4e5\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.810480 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/0f171f74-b9e1-42a3-9907-4d34dabee6c2-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.810571 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/0f171f74-b9e1-42a3-9907-4d34dabee6c2-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.810596 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh2mr\" (UniqueName: \"kubernetes.io/projected/0f171f74-b9e1-42a3-9907-4d34dabee6c2-kube-api-access-lh2mr\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.810619 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/680c351c-7945-4752-83ce-8cdd827cf4e5-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-z64ld\" (UID: \"680c351c-7945-4752-83ce-8cdd827cf4e5\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.810649 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f171f74-b9e1-42a3-9907-4d34dabee6c2-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.815857 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f171f74-b9e1-42a3-9907-4d34dabee6c2-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.815874 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/0f171f74-b9e1-42a3-9907-4d34dabee6c2-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.818848 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f171f74-b9e1-42a3-9907-4d34dabee6c2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.819149 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/0f171f74-b9e1-42a3-9907-4d34dabee6c2-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.819265 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/0f171f74-b9e1-42a3-9907-4d34dabee6c2-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.833169 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh2mr\" (UniqueName: \"kubernetes.io/projected/0f171f74-b9e1-42a3-9907-4d34dabee6c2-kube-api-access-lh2mr\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-tbx4t\" (UID: \"0f171f74-b9e1-42a3-9907-4d34dabee6c2\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.917861 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/680c351c-7945-4752-83ce-8cdd827cf4e5-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-z64ld\" (UID: \"680c351c-7945-4752-83ce-8cdd827cf4e5\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.920551 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/680c351c-7945-4752-83ce-8cdd827cf4e5-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-z64ld\" (UID: \"680c351c-7945-4752-83ce-8cdd827cf4e5\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.918345 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/680c351c-7945-4752-83ce-8cdd827cf4e5-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-z64ld\" (UID: \"680c351c-7945-4752-83ce-8cdd827cf4e5\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.920673 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-642vv\" (UniqueName: \"kubernetes.io/projected/680c351c-7945-4752-83ce-8cdd827cf4e5-kube-api-access-642vv\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-z64ld\" (UID: \"680c351c-7945-4752-83ce-8cdd827cf4e5\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.920754 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/680c351c-7945-4752-83ce-8cdd827cf4e5-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-z64ld\" (UID: \"680c351c-7945-4752-83ce-8cdd827cf4e5\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.920810 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/680c351c-7945-4752-83ce-8cdd827cf4e5-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-z64ld\" (UID: \"680c351c-7945-4752-83ce-8cdd827cf4e5\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.922164 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/680c351c-7945-4752-83ce-8cdd827cf4e5-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-z64ld\" (UID: \"680c351c-7945-4752-83ce-8cdd827cf4e5\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.942636 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/680c351c-7945-4752-83ce-8cdd827cf4e5-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-z64ld\" (UID: \"680c351c-7945-4752-83ce-8cdd827cf4e5\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.942856 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/680c351c-7945-4752-83ce-8cdd827cf4e5-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-z64ld\" (UID: \"680c351c-7945-4752-83ce-8cdd827cf4e5\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.947382 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-642vv\" (UniqueName: \"kubernetes.io/projected/680c351c-7945-4752-83ce-8cdd827cf4e5-kube-api-access-642vv\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-z64ld\" (UID: \"680c351c-7945-4752-83ce-8cdd827cf4e5\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.957327 4727 generic.go:334] "Generic (PLEG): container finished" podID="620908f6-27df-4ab9-a272-5a3bf56b2e81" containerID="72e10766490a77b74e4c88810cf29a6ede0f5aa7806c7b57950c7dd4650b544f" exitCode=0
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.957358 4727 generic.go:334] "Generic (PLEG): container finished" podID="620908f6-27df-4ab9-a272-5a3bf56b2e81" containerID="889b6b0ee3858397d2ecb8d7c16ef0a44f29c29044a7a1147d6af3114da80261" exitCode=0
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.957380 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt" event={"ID":"620908f6-27df-4ab9-a272-5a3bf56b2e81","Type":"ContainerDied","Data":"72e10766490a77b74e4c88810cf29a6ede0f5aa7806c7b57950c7dd4650b544f"}
Dec 10 15:45:54 crc kubenswrapper[4727]: I1210 15:45:54.957404 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt" event={"ID":"620908f6-27df-4ab9-a272-5a3bf56b2e81","Type":"ContainerDied","Data":"889b6b0ee3858397d2ecb8d7c16ef0a44f29c29044a7a1147d6af3114da80261"}
Dec 10 15:45:55 crc kubenswrapper[4727]: I1210 15:45:55.008758 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"
Dec 10 15:45:55 crc kubenswrapper[4727]: I1210 15:45:55.060752 4727 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld" Dec 10 15:45:55 crc kubenswrapper[4727]: I1210 15:45:55.517796 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs"] Dec 10 15:45:55 crc kubenswrapper[4727]: I1210 15:45:55.676940 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t"] Dec 10 15:45:55 crc kubenswrapper[4727]: I1210 15:45:55.820661 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld"] Dec 10 15:45:55 crc kubenswrapper[4727]: I1210 15:45:55.985125 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t" event={"ID":"0f171f74-b9e1-42a3-9907-4d34dabee6c2","Type":"ContainerStarted","Data":"e82ac8aa887ef86ccd90cec36c8e98cf886d624e8dfd1e7d426bc70f54b9b2b3"} Dec 10 15:45:55 crc kubenswrapper[4727]: I1210 15:45:55.987721 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld" event={"ID":"680c351c-7945-4752-83ce-8cdd827cf4e5","Type":"ContainerStarted","Data":"884b2449d1296c5439275cd416c00348a632a51f1b6807eee05c9cc016bed467"} Dec 10 15:45:55 crc kubenswrapper[4727]: I1210 15:45:55.993770 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt" event={"ID":"620908f6-27df-4ab9-a272-5a3bf56b2e81","Type":"ContainerDied","Data":"c9bd00b524fbd26d711db087bc29ee105c3ef71ad064ca96276fffaed8cf5d22"} Dec 10 15:45:55 crc kubenswrapper[4727]: I1210 15:45:55.993849 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9bd00b524fbd26d711db087bc29ee105c3ef71ad064ca96276fffaed8cf5d22" Dec 10 15:45:55 crc kubenswrapper[4727]: I1210 15:45:55.996869 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs" event={"ID":"b2a7f948-3565-42cf-81ed-c16f1b770019","Type":"ContainerStarted","Data":"a503ee296038b7bbc789332838d3e837dfc89bbe532b2d0a02e28f57e277339d"} Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.076602 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt" Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.261837 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/620908f6-27df-4ab9-a272-5a3bf56b2e81-manager-config\") pod \"620908f6-27df-4ab9-a272-5a3bf56b2e81\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.262271 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m62lr\" (UniqueName: \"kubernetes.io/projected/620908f6-27df-4ab9-a272-5a3bf56b2e81-kube-api-access-m62lr\") pod \"620908f6-27df-4ab9-a272-5a3bf56b2e81\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.262432 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-apiservice-cert\") pod \"620908f6-27df-4ab9-a272-5a3bf56b2e81\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.262482 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-webhook-cert\") pod \"620908f6-27df-4ab9-a272-5a3bf56b2e81\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.262536 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-loki-operator-metrics-cert\") pod \"620908f6-27df-4ab9-a272-5a3bf56b2e81\" (UID: \"620908f6-27df-4ab9-a272-5a3bf56b2e81\") " Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.270226 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "620908f6-27df-4ab9-a272-5a3bf56b2e81" (UID: "620908f6-27df-4ab9-a272-5a3bf56b2e81"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.270271 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-loki-operator-metrics-cert" (OuterVolumeSpecName: "loki-operator-metrics-cert") pod "620908f6-27df-4ab9-a272-5a3bf56b2e81" (UID: "620908f6-27df-4ab9-a272-5a3bf56b2e81"). InnerVolumeSpecName "loki-operator-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.270266 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "620908f6-27df-4ab9-a272-5a3bf56b2e81" (UID: "620908f6-27df-4ab9-a272-5a3bf56b2e81"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.271107 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620908f6-27df-4ab9-a272-5a3bf56b2e81-kube-api-access-m62lr" (OuterVolumeSpecName: "kube-api-access-m62lr") pod "620908f6-27df-4ab9-a272-5a3bf56b2e81" (UID: "620908f6-27df-4ab9-a272-5a3bf56b2e81"). InnerVolumeSpecName "kube-api-access-m62lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.294896 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/620908f6-27df-4ab9-a272-5a3bf56b2e81-manager-config" (OuterVolumeSpecName: "manager-config") pod "620908f6-27df-4ab9-a272-5a3bf56b2e81" (UID: "620908f6-27df-4ab9-a272-5a3bf56b2e81"). InnerVolumeSpecName "manager-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.365854 4727 reconciler_common.go:293] "Volume detached for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/620908f6-27df-4ab9-a272-5a3bf56b2e81-manager-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.365948 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m62lr\" (UniqueName: \"kubernetes.io/projected/620908f6-27df-4ab9-a272-5a3bf56b2e81-kube-api-access-m62lr\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.365963 4727 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.365978 4727 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:56 crc kubenswrapper[4727]: I1210 15:45:56.365992 4727 reconciler_common.go:293] "Volume detached for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/620908f6-27df-4ab9-a272-5a3bf56b2e81-loki-operator-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:57 crc kubenswrapper[4727]: I1210 15:45:57.005731 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt" Dec 10 15:45:57 crc kubenswrapper[4727]: I1210 15:45:57.037627 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"] Dec 10 15:45:57 crc kubenswrapper[4727]: I1210 15:45:57.053189 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-767689bfb5-7cvkt"] Dec 10 15:45:57 crc kubenswrapper[4727]: E1210 15:45:57.574269 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:45:58 crc kubenswrapper[4727]: I1210 15:45:58.580153 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="620908f6-27df-4ab9-a272-5a3bf56b2e81" path="/var/lib/kubelet/pods/620908f6-27df-4ab9-a272-5a3bf56b2e81/volumes" Dec 10 15:45:59 crc kubenswrapper[4727]: E1210 15:45:59.564772 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:46:00 crc kubenswrapper[4727]: I1210 15:46:00.063707 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld" event={"ID":"680c351c-7945-4752-83ce-8cdd827cf4e5","Type":"ContainerStarted","Data":"4ca9364f9a63f25a3036160dfbec7d9d932f18b750f20144b2919b014bd9ebd0"} Dec 10 15:46:00 crc kubenswrapper[4727]: I1210 15:46:00.065301 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld" Dec 10 15:46:00 crc kubenswrapper[4727]: I1210 15:46:00.071086 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs" event={"ID":"b2a7f948-3565-42cf-81ed-c16f1b770019","Type":"ContainerStarted","Data":"82b3d887b32f8326694345d5ed7a1e574a7419697865094379f95536b7cb9928"} Dec 10 15:46:00 crc kubenswrapper[4727]: I1210 15:46:00.071612 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs" Dec 10 15:46:00 crc kubenswrapper[4727]: I1210 15:46:00.084676 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t" event={"ID":"0f171f74-b9e1-42a3-9907-4d34dabee6c2","Type":"ContainerStarted","Data":"bc726c47468f20e5aef8e2b950c34896f42ca04d73d5628a29477cb8cfb93da9"} Dec 10 15:46:00 crc kubenswrapper[4727]: I1210 15:46:00.085033 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t" Dec 10 15:46:00 crc kubenswrapper[4727]: I1210 15:46:00.099159 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld" podStartSLOduration=2.701553413 podStartE2EDuration="6.099138121s" podCreationTimestamp="2025-12-10 15:45:54 +0000 UTC" firstStartedPulling="2025-12-10 15:45:55.947156021 
+0000 UTC m=+4460.141930563" lastFinishedPulling="2025-12-10 15:45:59.344740729 +0000 UTC m=+4463.539515271" observedRunningTime="2025-12-10 15:46:00.083589159 +0000 UTC m=+4464.278363701" watchObservedRunningTime="2025-12-10 15:46:00.099138121 +0000 UTC m=+4464.293912663" Dec 10 15:46:00 crc kubenswrapper[4727]: I1210 15:46:00.114110 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs" podStartSLOduration=2.727128168 podStartE2EDuration="6.114084918s" podCreationTimestamp="2025-12-10 15:45:54 +0000 UTC" firstStartedPulling="2025-12-10 15:45:55.941435227 +0000 UTC m=+4460.136209769" lastFinishedPulling="2025-12-10 15:45:59.328391977 +0000 UTC m=+4463.523166519" observedRunningTime="2025-12-10 15:46:00.104407994 +0000 UTC m=+4464.299182536" watchObservedRunningTime="2025-12-10 15:46:00.114084918 +0000 UTC m=+4464.308859460" Dec 10 15:46:00 crc kubenswrapper[4727]: I1210 15:46:00.144708 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t" podStartSLOduration=2.765125948 podStartE2EDuration="6.144682351s" podCreationTimestamp="2025-12-10 15:45:54 +0000 UTC" firstStartedPulling="2025-12-10 15:45:55.949157392 +0000 UTC m=+4460.143931934" lastFinishedPulling="2025-12-10 15:45:59.328713795 +0000 UTC m=+4463.523488337" observedRunningTime="2025-12-10 15:46:00.129253021 +0000 UTC m=+4464.324027563" watchObservedRunningTime="2025-12-10 15:46:00.144682351 +0000 UTC m=+4464.339456893" Dec 10 15:46:01 crc kubenswrapper[4727]: I1210 15:46:01.375507 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="f429b94a-d632-41b7-85c3-584f6dfe4475" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:01 crc kubenswrapper[4727]: I1210 15:46:01.421159 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="cc5506b0-8e70-49c6-8783-5802aca6f72e" containerName="loki-compactor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:11 crc kubenswrapper[4727]: I1210 15:46:11.374514 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="f429b94a-d632-41b7-85c3-584f6dfe4475" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:11 crc kubenswrapper[4727]: I1210 15:46:11.420722 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="cc5506b0-8e70-49c6-8783-5802aca6f72e" containerName="loki-compactor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:11 crc kubenswrapper[4727]: E1210 15:46:11.565382 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:46:12 crc kubenswrapper[4727]: E1210 15:46:12.566038 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.551830 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.552359 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" containerName="loki-index-gateway" containerID="cri-o://e0a379981c73b973631f01231f5b1510b285b57dde56a1cb1358a7a9ac72da91" gracePeriod=30 Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.599587 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k"] Dec 10 15:46:13 crc kubenswrapper[4727]: E1210 15:46:13.600235 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620908f6-27df-4ab9-a272-5a3bf56b2e81" containerName="kube-rbac-proxy" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.600254 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="620908f6-27df-4ab9-a272-5a3bf56b2e81" containerName="kube-rbac-proxy" Dec 10 15:46:13 crc kubenswrapper[4727]: E1210 15:46:13.600295 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620908f6-27df-4ab9-a272-5a3bf56b2e81" containerName="manager" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.600305 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="620908f6-27df-4ab9-a272-5a3bf56b2e81" containerName="manager" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.600580 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="620908f6-27df-4ab9-a272-5a3bf56b2e81" containerName="kube-rbac-proxy" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.600607 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="620908f6-27df-4ab9-a272-5a3bf56b2e81" containerName="manager" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.601805 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.620429 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k"] Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.732141 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/350e96e2-3fa8-4673-a53e-f925e5922be3-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.732282 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86vhk\" (UniqueName: \"kubernetes.io/projected/350e96e2-3fa8-4673-a53e-f925e5922be3-kube-api-access-86vhk\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.732410 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/350e96e2-3fa8-4673-a53e-f925e5922be3-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.732597 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/350e96e2-3fa8-4673-a53e-f925e5922be3-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.732740 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/350e96e2-3fa8-4673-a53e-f925e5922be3-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.732879 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/350e96e2-3fa8-4673-a53e-f925e5922be3-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.733181 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/350e96e2-3fa8-4673-a53e-f925e5922be3-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.733283 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/350e96e2-3fa8-4673-a53e-f925e5922be3-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.733342 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/350e96e2-3fa8-4673-a53e-f925e5922be3-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.835856 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/350e96e2-3fa8-4673-a53e-f925e5922be3-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.835966 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/350e96e2-3fa8-4673-a53e-f925e5922be3-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.835998 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/350e96e2-3fa8-4673-a53e-f925e5922be3-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.836020 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/350e96e2-3fa8-4673-a53e-f925e5922be3-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.836086 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/350e96e2-3fa8-4673-a53e-f925e5922be3-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.836152 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86vhk\" (UniqueName: \"kubernetes.io/projected/350e96e2-3fa8-4673-a53e-f925e5922be3-kube-api-access-86vhk\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.836372 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/350e96e2-3fa8-4673-a53e-f925e5922be3-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.836412 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/350e96e2-3fa8-4673-a53e-f925e5922be3-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.836440 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/350e96e2-3fa8-4673-a53e-f925e5922be3-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.837022 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/350e96e2-3fa8-4673-a53e-f925e5922be3-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.837219 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/350e96e2-3fa8-4673-a53e-f925e5922be3-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.837491 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/350e96e2-3fa8-4673-a53e-f925e5922be3-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.837796 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/350e96e2-3fa8-4673-a53e-f925e5922be3-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.838062 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/350e96e2-3fa8-4673-a53e-f925e5922be3-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.842240 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/350e96e2-3fa8-4673-a53e-f925e5922be3-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.848416 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/350e96e2-3fa8-4673-a53e-f925e5922be3-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.855566 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/350e96e2-3fa8-4673-a53e-f925e5922be3-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.864160 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86vhk\" (UniqueName: \"kubernetes.io/projected/350e96e2-3fa8-4673-a53e-f925e5922be3-kube-api-access-86vhk\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-jj92k\" (UID: \"350e96e2-3fa8-4673-a53e-f925e5922be3\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:13 crc kubenswrapper[4727]: I1210 15:46:13.935503 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:14 crc kubenswrapper[4727]: I1210 15:46:14.504423 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k"] Dec 10 15:46:14 crc kubenswrapper[4727]: I1210 15:46:14.801136 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-q5hvs" Dec 10 15:46:14 crc kubenswrapper[4727]: I1210 15:46:14.864006 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-hs594"] Dec 10 15:46:14 crc kubenswrapper[4727]: I1210 15:46:14.864473 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" podUID="b9128ddd-7fb4-41b8-a218-2201b9ef57ad" containerName="loki-distributor" containerID="cri-o://73168f026b7ee2a0518c131b8a8d944ae49445e3d21af643bce690d59f2f9790" gracePeriod=30 Dec 10 15:46:15 crc kubenswrapper[4727]: I1210 15:46:15.195597 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-tbx4t" Dec 10 15:46:15 crc kubenswrapper[4727]: I1210 15:46:15.200445 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-z64ld" Dec 10 15:46:15 crc kubenswrapper[4727]: I1210 15:46:15.302868 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" event={"ID":"350e96e2-3fa8-4673-a53e-f925e5922be3","Type":"ContainerStarted","Data":"0c24061c34a4891330928eef3f19aff91522cd9c8aa359daf38153cb39befcbe"} Dec 10 15:46:15 crc kubenswrapper[4727]: I1210 15:46:15.344115 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-9776w"] Dec 10 15:46:15 crc kubenswrapper[4727]: I1210 15:46:15.344415 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" podUID="2375881f-bb87-45f5-83a1-a314f445945d" containerName="loki-querier" containerID="cri-o://9c25f8de23aa7cb02b99c56c1218e7f2a68b907c896fcb837b5620184abe3873" gracePeriod=30 Dec 10 15:46:15 crc kubenswrapper[4727]: I1210 15:46:15.358346 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"] Dec 10 15:46:15 crc kubenswrapper[4727]: I1210 15:46:15.358554 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh" podUID="382fdc05-9257-41ca-a032-1dc84b1483e3" containerName="loki-query-frontend" containerID="cri-o://8a80f8a4651493d653683196e9a9da7d12c13d8d0272e1faf3201d7c6dfddf83" gracePeriod=30 Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.332332 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" event={"ID":"350e96e2-3fa8-4673-a53e-f925e5922be3","Type":"ContainerStarted","Data":"128d346bb898d5028eb83d5140aa235f98ae5c2d66f926f63f0fc1e008c21da9"} Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.332875 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.343800 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.356175 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-jj92k" podStartSLOduration=2.787036509 podStartE2EDuration="5.356148336s" podCreationTimestamp="2025-12-10 15:46:13 +0000 UTC" firstStartedPulling="2025-12-10 15:46:14.506547968 +0000 UTC m=+4478.701322510" lastFinishedPulling="2025-12-10 15:46:17.075659805 +0000 UTC m=+4481.270434337" observedRunningTime="2025-12-10 15:46:18.349533549 +0000 UTC m=+4482.544308091" watchObservedRunningTime="2025-12-10 15:46:18.356148336 +0000 UTC m=+4482.550922878" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.413343 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"] Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.413589 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" podUID="98542d0d-88a6-49f6-a86e-36a5ba2d7b18" containerName="gateway" containerID="cri-o://87f5e91cf6e1c2c4d75918df670f374f0da57777ee8e6b7415817303e6bc3d15" gracePeriod=30 Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.471460 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p"] Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.473396 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.490724 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p"] Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.582107 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.582237 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8w9b\" (UniqueName: \"kubernetes.io/projected/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-kube-api-access-k8w9b\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.582334 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.582374 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.582421 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.582479 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.582587 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.582705 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.582752 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.698549 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.698617 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.698703 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.698746 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8w9b\" (UniqueName: \"kubernetes.io/projected/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-kube-api-access-k8w9b\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.698799 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.698818 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.698845 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.698874 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.698936 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.700225 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.700618 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.701606 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.701621 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.701648 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.714334 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.714966 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.715481 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.720849 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8w9b\" (UniqueName: \"kubernetes.io/projected/d6dcffc0-6984-4b52-b5aa-e6703e78a9c0-kube-api-access-k8w9b\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-h865p\" (UID: \"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:18 crc kubenswrapper[4727]: I1210 15:46:18.796947 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:19 crc kubenswrapper[4727]: I1210 15:46:19.323557 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p"] Dec 10 15:46:19 crc kubenswrapper[4727]: W1210 15:46:19.332890 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6dcffc0_6984_4b52_b5aa_e6703e78a9c0.slice/crio-500320a58829d16886454c27c5d3393b4ac103b99f38fa2f40084886fdcd4b17 WatchSource:0}: Error finding container 500320a58829d16886454c27c5d3393b4ac103b99f38fa2f40084886fdcd4b17: Status 404 returned error can't find the container with id 500320a58829d16886454c27c5d3393b4ac103b99f38fa2f40084886fdcd4b17 Dec 10 15:46:19 crc kubenswrapper[4727]: I1210 15:46:19.845217 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" podUID="b9128ddd-7fb4-41b8-a218-2201b9ef57ad" containerName="loki-distributor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.082577 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" podUID="2375881f-bb87-45f5-83a1-a314f445945d" containerName="loki-querier" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.084627 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.240402 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-gateway-client-http\") pod \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.240765 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-ca-bundle\") pod \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.240820 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-rbac\") pod \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.240864 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-tenants\") pod \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.241002 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grkds\" (UniqueName: \"kubernetes.io/projected/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-kube-api-access-grkds\") pod \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.241112 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-lokistack-gateway\") pod \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.241151 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-ca-bundle\") pod \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.241210 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-tls-secret\") pod \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.241260 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-gateway-ca-bundle\") pod \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\" (UID: \"98542d0d-88a6-49f6-a86e-36a5ba2d7b18\") " Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.241377 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "98542d0d-88a6-49f6-a86e-36a5ba2d7b18" (UID: "98542d0d-88a6-49f6-a86e-36a5ba2d7b18"). InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.241738 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.242273 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-gateway-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-gateway-ca-bundle") pod "98542d0d-88a6-49f6-a86e-36a5ba2d7b18" (UID: "98542d0d-88a6-49f6-a86e-36a5ba2d7b18"). InnerVolumeSpecName "cloudkitty-lokistack-gateway-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.247669 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-kube-api-access-grkds" (OuterVolumeSpecName: "kube-api-access-grkds") pod "98542d0d-88a6-49f6-a86e-36a5ba2d7b18" (UID: "98542d0d-88a6-49f6-a86e-36a5ba2d7b18"). InnerVolumeSpecName "kube-api-access-grkds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.249718 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-gateway-client-http" (OuterVolumeSpecName: "cloudkitty-lokistack-gateway-client-http") pod "98542d0d-88a6-49f6-a86e-36a5ba2d7b18" (UID: "98542d0d-88a6-49f6-a86e-36a5ba2d7b18"). InnerVolumeSpecName "cloudkitty-lokistack-gateway-client-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.252003 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-ca-bundle" (OuterVolumeSpecName: "cloudkitty-ca-bundle") pod "98542d0d-88a6-49f6-a86e-36a5ba2d7b18" (UID: "98542d0d-88a6-49f6-a86e-36a5ba2d7b18"). InnerVolumeSpecName "cloudkitty-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.253682 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-tls-secret" (OuterVolumeSpecName: "tls-secret") pod "98542d0d-88a6-49f6-a86e-36a5ba2d7b18" (UID: "98542d0d-88a6-49f6-a86e-36a5ba2d7b18"). InnerVolumeSpecName "tls-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.278604 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-rbac" (OuterVolumeSpecName: "rbac") pod "98542d0d-88a6-49f6-a86e-36a5ba2d7b18" (UID: "98542d0d-88a6-49f6-a86e-36a5ba2d7b18"). InnerVolumeSpecName "rbac". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.283377 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-tenants" (OuterVolumeSpecName: "tenants") pod "98542d0d-88a6-49f6-a86e-36a5ba2d7b18" (UID: "98542d0d-88a6-49f6-a86e-36a5ba2d7b18"). InnerVolumeSpecName "tenants". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.318112 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-lokistack-gateway" (OuterVolumeSpecName: "lokistack-gateway") pod "98542d0d-88a6-49f6-a86e-36a5ba2d7b18" (UID: "98542d0d-88a6-49f6-a86e-36a5ba2d7b18"). InnerVolumeSpecName "lokistack-gateway". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.344535 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-gateway-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.344576 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-lokistack-gateway-client-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.344589 4727 reconciler_common.go:293] "Volume detached for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-rbac\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.344598 4727 reconciler_common.go:293] "Volume detached for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-tenants\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.344607 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grkds\" (UniqueName: \"kubernetes.io/projected/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-kube-api-access-grkds\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.344617 4727 reconciler_common.go:293] "Volume detached for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-lokistack-gateway\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.344627 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-cloudkitty-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.344635 4727 reconciler_common.go:293] "Volume detached for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/98542d0d-88a6-49f6-a86e-36a5ba2d7b18-tls-secret\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.362441 4727 generic.go:334] "Generic (PLEG): container finished" podID="98542d0d-88a6-49f6-a86e-36a5ba2d7b18" containerID="87f5e91cf6e1c2c4d75918df670f374f0da57777ee8e6b7415817303e6bc3d15" exitCode=0 Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.362500 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" event={"ID":"98542d0d-88a6-49f6-a86e-36a5ba2d7b18","Type":"ContainerDied","Data":"87f5e91cf6e1c2c4d75918df670f374f0da57777ee8e6b7415817303e6bc3d15"} Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.362527 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" event={"ID":"98542d0d-88a6-49f6-a86e-36a5ba2d7b18","Type":"ContainerDied","Data":"9de831fd4dea4cc60bf2c740b29026f20c8a3c40dcfa6973be9c5d46a3330dc3"} Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.362542 4727 scope.go:117] "RemoveContainer" containerID="87f5e91cf6e1c2c4d75918df670f374f0da57777ee8e6b7415817303e6bc3d15" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.362667 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.369841 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" event={"ID":"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0","Type":"ContainerStarted","Data":"234127d9c5e868844bbcf3d23337289396103896e067755c22b8592fbe777f9d"} Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.369883 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" event={"ID":"d6dcffc0-6984-4b52-b5aa-e6703e78a9c0","Type":"ContainerStarted","Data":"500320a58829d16886454c27c5d3393b4ac103b99f38fa2f40084886fdcd4b17"} Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.394864 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" podStartSLOduration=2.394849004 podStartE2EDuration="2.394849004s" podCreationTimestamp="2025-12-10 15:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:46:20.385220411 +0000 UTC m=+4484.579994953" watchObservedRunningTime="2025-12-10 15:46:20.394849004 +0000 UTC m=+4484.589623546" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.398677 4727 scope.go:117] "RemoveContainer" containerID="87f5e91cf6e1c2c4d75918df670f374f0da57777ee8e6b7415817303e6bc3d15" Dec 10 15:46:20 crc kubenswrapper[4727]: E1210 15:46:20.399321 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87f5e91cf6e1c2c4d75918df670f374f0da57777ee8e6b7415817303e6bc3d15\": container with ID starting with 87f5e91cf6e1c2c4d75918df670f374f0da57777ee8e6b7415817303e6bc3d15 not found: ID does not exist" containerID="87f5e91cf6e1c2c4d75918df670f374f0da57777ee8e6b7415817303e6bc3d15" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.399432 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87f5e91cf6e1c2c4d75918df670f374f0da57777ee8e6b7415817303e6bc3d15"} err="failed to get container status \"87f5e91cf6e1c2c4d75918df670f374f0da57777ee8e6b7415817303e6bc3d15\": rpc error: code = NotFound desc = could not find container \"87f5e91cf6e1c2c4d75918df670f374f0da57777ee8e6b7415817303e6bc3d15\": container with ID starting with 87f5e91cf6e1c2c4d75918df670f374f0da57777ee8e6b7415817303e6bc3d15 not found: ID does not exist" Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.417562 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"] Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.429319 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-kvpwf"] Dec 10 15:46:20 crc kubenswrapper[4727]: I1210 15:46:20.579258 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98542d0d-88a6-49f6-a86e-36a5ba2d7b18" path="/var/lib/kubelet/pods/98542d0d-88a6-49f6-a86e-36a5ba2d7b18/volumes" Dec 10 15:46:21 crc kubenswrapper[4727]: I1210 15:46:21.374757 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="f429b94a-d632-41b7-85c3-584f6dfe4475" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:21 crc kubenswrapper[4727]: I1210 15:46:21.375111 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:21 crc kubenswrapper[4727]: I1210 15:46:21.383222 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:21 crc kubenswrapper[4727]: I1210 15:46:21.407059 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-h865p" Dec 10 15:46:21 crc kubenswrapper[4727]: I1210 15:46:21.423290 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" containerName="loki-index-gateway" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:21 crc kubenswrapper[4727]: I1210 15:46:21.428757 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="cc5506b0-8e70-49c6-8783-5802aca6f72e" containerName="loki-compactor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:21 crc kubenswrapper[4727]: I1210 15:46:21.428864 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:21 crc kubenswrapper[4727]: I1210 15:46:21.491371 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"] Dec 10 15:46:21 crc kubenswrapper[4727]: I1210 15:46:21.491593 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" podUID="26378e88-9d49-423c-b1a9-91a70f08b760" containerName="gateway" containerID="cri-o://e3dd0829bacadc2efbebb51ebf9940dab237c7ba721c521f8952f2cadfd420b9" gracePeriod=30 Dec 10 15:46:22 crc kubenswrapper[4727]: E1210 15:46:22.565556 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.078853 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.216248 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-gateway-client-http\") pod \"26378e88-9d49-423c-b1a9-91a70f08b760\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.216485 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jbks\" (UniqueName: \"kubernetes.io/projected/26378e88-9d49-423c-b1a9-91a70f08b760-kube-api-access-2jbks\") pod \"26378e88-9d49-423c-b1a9-91a70f08b760\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.216537 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-tenants\") pod \"26378e88-9d49-423c-b1a9-91a70f08b760\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.216604 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-tls-secret\") pod \"26378e88-9d49-423c-b1a9-91a70f08b760\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.216661 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-ca-bundle\") pod \"26378e88-9d49-423c-b1a9-91a70f08b760\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.216692 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-ca-bundle\") pod \"26378e88-9d49-423c-b1a9-91a70f08b760\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.216760 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-rbac\") pod \"26378e88-9d49-423c-b1a9-91a70f08b760\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.216868 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-lokistack-gateway\") pod \"26378e88-9d49-423c-b1a9-91a70f08b760\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.216934 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-gateway-ca-bundle\") pod \"26378e88-9d49-423c-b1a9-91a70f08b760\" (UID: \"26378e88-9d49-423c-b1a9-91a70f08b760\") " Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.217370 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-ca-bundle" (OuterVolumeSpecName: "cloudkitty-ca-bundle") pod "26378e88-9d49-423c-b1a9-91a70f08b760" (UID: "26378e88-9d49-423c-b1a9-91a70f08b760"). InnerVolumeSpecName "cloudkitty-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.217392 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "26378e88-9d49-423c-b1a9-91a70f08b760" (UID: "26378e88-9d49-423c-b1a9-91a70f08b760"). InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.217744 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.217769 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.217950 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-gateway-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-gateway-ca-bundle") pod "26378e88-9d49-423c-b1a9-91a70f08b760" (UID: "26378e88-9d49-423c-b1a9-91a70f08b760"). InnerVolumeSpecName "cloudkitty-lokistack-gateway-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.223453 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-gateway-client-http" (OuterVolumeSpecName: "cloudkitty-lokistack-gateway-client-http") pod "26378e88-9d49-423c-b1a9-91a70f08b760" (UID: "26378e88-9d49-423c-b1a9-91a70f08b760"). InnerVolumeSpecName "cloudkitty-lokistack-gateway-client-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.224919 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26378e88-9d49-423c-b1a9-91a70f08b760-kube-api-access-2jbks" (OuterVolumeSpecName: "kube-api-access-2jbks") pod "26378e88-9d49-423c-b1a9-91a70f08b760" (UID: "26378e88-9d49-423c-b1a9-91a70f08b760"). InnerVolumeSpecName "kube-api-access-2jbks". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.230356 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-tls-secret" (OuterVolumeSpecName: "tls-secret") pod "26378e88-9d49-423c-b1a9-91a70f08b760" (UID: "26378e88-9d49-423c-b1a9-91a70f08b760"). InnerVolumeSpecName "tls-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.256852 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-lokistack-gateway" (OuterVolumeSpecName: "lokistack-gateway") pod "26378e88-9d49-423c-b1a9-91a70f08b760" (UID: "26378e88-9d49-423c-b1a9-91a70f08b760"). InnerVolumeSpecName "lokistack-gateway". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.259084 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-tenants" (OuterVolumeSpecName: "tenants") pod "26378e88-9d49-423c-b1a9-91a70f08b760" (UID: "26378e88-9d49-423c-b1a9-91a70f08b760"). InnerVolumeSpecName "tenants". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.259490 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-rbac" (OuterVolumeSpecName: "rbac") pod "26378e88-9d49-423c-b1a9-91a70f08b760" (UID: "26378e88-9d49-423c-b1a9-91a70f08b760"). InnerVolumeSpecName "rbac". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.319541 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jbks\" (UniqueName: \"kubernetes.io/projected/26378e88-9d49-423c-b1a9-91a70f08b760-kube-api-access-2jbks\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.319583 4727 reconciler_common.go:293] "Volume detached for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-tenants\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.319596 4727 reconciler_common.go:293] "Volume detached for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-tls-secret\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.319604 4727 reconciler_common.go:293] "Volume detached for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-rbac\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.319613 4727 reconciler_common.go:293] "Volume detached for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-lokistack-gateway\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.319623 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-gateway-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.319634 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/26378e88-9d49-423c-b1a9-91a70f08b760-cloudkitty-lokistack-gateway-client-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.408935 4727 generic.go:334] "Generic (PLEG): container finished" podID="26378e88-9d49-423c-b1a9-91a70f08b760" containerID="e3dd0829bacadc2efbebb51ebf9940dab237c7ba721c521f8952f2cadfd420b9" exitCode=0 Dec 10 15:46:23 crc 
kubenswrapper[4727]: I1210 15:46:23.410297 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.422109 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" event={"ID":"26378e88-9d49-423c-b1a9-91a70f08b760","Type":"ContainerDied","Data":"e3dd0829bacadc2efbebb51ebf9940dab237c7ba721c521f8952f2cadfd420b9"} Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.422188 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv" event={"ID":"26378e88-9d49-423c-b1a9-91a70f08b760","Type":"ContainerDied","Data":"2d949001bd031eb276affe2877221817fc8ed3a38dfcd7146e7dd30ff47a5f12"} Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.422212 4727 scope.go:117] "RemoveContainer" containerID="e3dd0829bacadc2efbebb51ebf9940dab237c7ba721c521f8952f2cadfd420b9" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.469619 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"] Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.481787 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-ltztv"] Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.487436 4727 scope.go:117] "RemoveContainer" containerID="e3dd0829bacadc2efbebb51ebf9940dab237c7ba721c521f8952f2cadfd420b9" Dec 10 15:46:23 crc kubenswrapper[4727]: E1210 15:46:23.488475 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3dd0829bacadc2efbebb51ebf9940dab237c7ba721c521f8952f2cadfd420b9\": container with ID starting with e3dd0829bacadc2efbebb51ebf9940dab237c7ba721c521f8952f2cadfd420b9 not found: ID does not exist" containerID="e3dd0829bacadc2efbebb51ebf9940dab237c7ba721c521f8952f2cadfd420b9" Dec 10 15:46:23 crc kubenswrapper[4727]: I1210 15:46:23.488603 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3dd0829bacadc2efbebb51ebf9940dab237c7ba721c521f8952f2cadfd420b9"} err="failed to get container status \"e3dd0829bacadc2efbebb51ebf9940dab237c7ba721c521f8952f2cadfd420b9\": rpc error: code = NotFound desc = could not find container \"e3dd0829bacadc2efbebb51ebf9940dab237c7ba721c521f8952f2cadfd420b9\": container with ID starting with e3dd0829bacadc2efbebb51ebf9940dab237c7ba721c521f8952f2cadfd420b9 not found: ID does not exist" Dec 10 15:46:24 crc kubenswrapper[4727]: I1210 15:46:24.577085 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26378e88-9d49-423c-b1a9-91a70f08b760" path="/var/lib/kubelet/pods/26378e88-9d49-423c-b1a9-91a70f08b760/volumes" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.432762 4727 generic.go:334] "Generic (PLEG): container finished" podID="cc5506b0-8e70-49c6-8783-5802aca6f72e" containerID="6c32b0d6e5e6077b749e9c0363ea9cf18f2d35e78f9a420832ce9108a8f6a5f2" exitCode=137 Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.432835 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"cc5506b0-8e70-49c6-8783-5802aca6f72e","Type":"ContainerDied","Data":"6c32b0d6e5e6077b749e9c0363ea9cf18f2d35e78f9a420832ce9108a8f6a5f2"} Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.435246 4727 generic.go:334] "Generic (PLEG): container 
finished" podID="f429b94a-d632-41b7-85c3-584f6dfe4475" containerID="94bbc5178158254c69445da8de8b9cb648167455f1604df38821cd96546171bc" exitCode=137 Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.435283 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"f429b94a-d632-41b7-85c3-584f6dfe4475","Type":"ContainerDied","Data":"94bbc5178158254c69445da8de8b9cb648167455f1604df38821cd96546171bc"} Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.774176 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.784208 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.873665 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc5506b0-8e70-49c6-8783-5802aca6f72e-config\") pod \"cc5506b0-8e70-49c6-8783-5802aca6f72e\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.873732 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-ca-bundle\") pod \"cc5506b0-8e70-49c6-8783-5802aca6f72e\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.873759 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"storage\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cc5506b0-8e70-49c6-8783-5802aca6f72e\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.873823 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx7tb\" (UniqueName: \"kubernetes.io/projected/f429b94a-d632-41b7-85c3-584f6dfe4475-kube-api-access-sx7tb\") pod \"f429b94a-d632-41b7-85c3-584f6dfe4475\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.873934 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-compactor-grpc\") pod \"cc5506b0-8e70-49c6-8783-5802aca6f72e\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.873968 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ca-bundle\") pod \"f429b94a-d632-41b7-85c3-584f6dfe4475\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.873992 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkrcx\" (UniqueName: \"kubernetes.io/projected/cc5506b0-8e70-49c6-8783-5802aca6f72e-kube-api-access-dkrcx\") pod \"cc5506b0-8e70-49c6-8783-5802aca6f72e\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.874010 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"storage\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"f429b94a-d632-41b7-85c3-584f6dfe4475\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.874086 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"wal\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"f429b94a-d632-41b7-85c3-584f6dfe4475\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.874169 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f429b94a-d632-41b7-85c3-584f6dfe4475-config\") pod \"f429b94a-d632-41b7-85c3-584f6dfe4475\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.874206 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-loki-s3\") pod \"cc5506b0-8e70-49c6-8783-5802aca6f72e\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.874258 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ingester-http\") pod \"f429b94a-d632-41b7-85c3-584f6dfe4475\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.874311 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-compactor-http\") pod \"cc5506b0-8e70-49c6-8783-5802aca6f72e\" (UID: \"cc5506b0-8e70-49c6-8783-5802aca6f72e\") " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.874362 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-loki-s3\") pod \"f429b94a-d632-41b7-85c3-584f6dfe4475\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.874389 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ingester-grpc\") pod \"f429b94a-d632-41b7-85c3-584f6dfe4475\" (UID: \"f429b94a-d632-41b7-85c3-584f6dfe4475\") " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.874448 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc5506b0-8e70-49c6-8783-5802aca6f72e-config" (OuterVolumeSpecName: "config") pod "cc5506b0-8e70-49c6-8783-5802aca6f72e" (UID: "cc5506b0-8e70-49c6-8783-5802aca6f72e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.874489 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "f429b94a-d632-41b7-85c3-584f6dfe4475" (UID: "f429b94a-d632-41b7-85c3-584f6dfe4475"). 
InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.874495 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "cc5506b0-8e70-49c6-8783-5802aca6f72e" (UID: "cc5506b0-8e70-49c6-8783-5802aca6f72e"). InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.874885 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f429b94a-d632-41b7-85c3-584f6dfe4475-config" (OuterVolumeSpecName: "config") pod "f429b94a-d632-41b7-85c3-584f6dfe4475" (UID: "f429b94a-d632-41b7-85c3-584f6dfe4475"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.875049 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f429b94a-d632-41b7-85c3-584f6dfe4475-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.875067 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc5506b0-8e70-49c6-8783-5802aca6f72e-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.875078 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.875091 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.880874 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-compactor-http" (OuterVolumeSpecName: "cloudkitty-lokistack-compactor-http") pod "cc5506b0-8e70-49c6-8783-5802aca6f72e" (UID: "cc5506b0-8e70-49c6-8783-5802aca6f72e"). InnerVolumeSpecName "cloudkitty-lokistack-compactor-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.880958 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "storage") pod "f429b94a-d632-41b7-85c3-584f6dfe4475" (UID: "f429b94a-d632-41b7-85c3-584f6dfe4475"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.881182 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ingester-grpc" (OuterVolumeSpecName: "cloudkitty-lokistack-ingester-grpc") pod "f429b94a-d632-41b7-85c3-584f6dfe4475" (UID: "f429b94a-d632-41b7-85c3-584f6dfe4475"). InnerVolumeSpecName "cloudkitty-lokistack-ingester-grpc". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.881361 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f429b94a-d632-41b7-85c3-584f6dfe4475-kube-api-access-sx7tb" (OuterVolumeSpecName: "kube-api-access-sx7tb") pod "f429b94a-d632-41b7-85c3-584f6dfe4475" (UID: "f429b94a-d632-41b7-85c3-584f6dfe4475"). InnerVolumeSpecName "kube-api-access-sx7tb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.882334 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-loki-s3" (OuterVolumeSpecName: "cloudkitty-loki-s3") pod "cc5506b0-8e70-49c6-8783-5802aca6f72e" (UID: "cc5506b0-8e70-49c6-8783-5802aca6f72e"). InnerVolumeSpecName "cloudkitty-loki-s3". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.882519 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ingester-http" (OuterVolumeSpecName: "cloudkitty-lokistack-ingester-http") pod "f429b94a-d632-41b7-85c3-584f6dfe4475" (UID: "f429b94a-d632-41b7-85c3-584f6dfe4475"). InnerVolumeSpecName "cloudkitty-lokistack-ingester-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.882729 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "storage") pod "cc5506b0-8e70-49c6-8783-5802aca6f72e" (UID: "cc5506b0-8e70-49c6-8783-5802aca6f72e"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.882772 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-loki-s3" (OuterVolumeSpecName: "cloudkitty-loki-s3") pod "f429b94a-d632-41b7-85c3-584f6dfe4475" (UID: "f429b94a-d632-41b7-85c3-584f6dfe4475"). InnerVolumeSpecName "cloudkitty-loki-s3". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.882780 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "wal") pod "f429b94a-d632-41b7-85c3-584f6dfe4475" (UID: "f429b94a-d632-41b7-85c3-584f6dfe4475"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.893446 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-compactor-grpc" (OuterVolumeSpecName: "cloudkitty-lokistack-compactor-grpc") pod "cc5506b0-8e70-49c6-8783-5802aca6f72e" (UID: "cc5506b0-8e70-49c6-8783-5802aca6f72e"). InnerVolumeSpecName "cloudkitty-lokistack-compactor-grpc". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.895101 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc5506b0-8e70-49c6-8783-5802aca6f72e-kube-api-access-dkrcx" (OuterVolumeSpecName: "kube-api-access-dkrcx") pod "cc5506b0-8e70-49c6-8783-5802aca6f72e" (UID: "cc5506b0-8e70-49c6-8783-5802aca6f72e"). InnerVolumeSpecName "kube-api-access-dkrcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.976220 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-loki-s3\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.976311 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ingester-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.976349 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-compactor-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.976367 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-loki-s3\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.976382 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f429b94a-d632-41b7-85c3-584f6dfe4475-cloudkitty-lokistack-ingester-grpc\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.976446 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.976464 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx7tb\" (UniqueName: \"kubernetes.io/projected/f429b94a-d632-41b7-85c3-584f6dfe4475-kube-api-access-sx7tb\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.976478 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cc5506b0-8e70-49c6-8783-5802aca6f72e-cloudkitty-lokistack-compactor-grpc\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.976512 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkrcx\" (UniqueName: \"kubernetes.io/projected/cc5506b0-8e70-49c6-8783-5802aca6f72e-kube-api-access-dkrcx\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.976539 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.976557 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 10 15:46:25 crc kubenswrapper[4727]: I1210 15:46:25.999007 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.000673 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.003136 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.077739 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.077777 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.077788 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.450594 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.450674 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"f429b94a-d632-41b7-85c3-584f6dfe4475","Type":"ContainerDied","Data":"78c5bd88a87496c0ebd76d36ae8ca963447c07f6168d78c7e5183a605b4c35af"} Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.450784 4727 scope.go:117] "RemoveContainer" containerID="94bbc5178158254c69445da8de8b9cb648167455f1604df38821cd96546171bc" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.456722 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"cc5506b0-8e70-49c6-8783-5802aca6f72e","Type":"ContainerDied","Data":"2050ae5f93f59179a7cdec5a626eb7c7e19585a2c2cf3437e03ff56454caa240"} Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.456804 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.496991 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.511952 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.521444 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.531230 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.583332 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc5506b0-8e70-49c6-8783-5802aca6f72e" path="/var/lib/kubelet/pods/cc5506b0-8e70-49c6-8783-5802aca6f72e/volumes" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.584172 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f429b94a-d632-41b7-85c3-584f6dfe4475" path="/var/lib/kubelet/pods/f429b94a-d632-41b7-85c3-584f6dfe4475/volumes" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.703562 4727 scope.go:117] "RemoveContainer" containerID="6c32b0d6e5e6077b749e9c0363ea9cf18f2d35e78f9a420832ce9108a8f6a5f2" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.717558 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 15:46:26 crc kubenswrapper[4727]: E1210 15:46:26.718134 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98542d0d-88a6-49f6-a86e-36a5ba2d7b18" containerName="gateway" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.718152 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="98542d0d-88a6-49f6-a86e-36a5ba2d7b18" containerName="gateway" Dec 10 15:46:26 crc kubenswrapper[4727]: E1210 15:46:26.718171 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5506b0-8e70-49c6-8783-5802aca6f72e" containerName="loki-compactor" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.718177 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5506b0-8e70-49c6-8783-5802aca6f72e" containerName="loki-compactor" Dec 10 15:46:26 crc kubenswrapper[4727]: E1210 15:46:26.718203 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f429b94a-d632-41b7-85c3-584f6dfe4475" containerName="loki-ingester" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.718208 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f429b94a-d632-41b7-85c3-584f6dfe4475" containerName="loki-ingester" Dec 10 15:46:26 crc kubenswrapper[4727]: E1210 15:46:26.718234 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26378e88-9d49-423c-b1a9-91a70f08b760" containerName="gateway" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.718240 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="26378e88-9d49-423c-b1a9-91a70f08b760" containerName="gateway" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.718430 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f429b94a-d632-41b7-85c3-584f6dfe4475" containerName="loki-ingester" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.718445 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="26378e88-9d49-423c-b1a9-91a70f08b760" containerName="gateway" Dec 10 15:46:26 crc 
kubenswrapper[4727]: I1210 15:46:26.718459 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5506b0-8e70-49c6-8783-5802aca6f72e" containerName="loki-compactor" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.718472 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="98542d0d-88a6-49f6-a86e-36a5ba2d7b18" containerName="gateway" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.719411 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.724283 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.724315 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.743125 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.744659 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.747339 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.747438 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.771610 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.783372 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.907087 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.907399 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.907558 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c11b517c-638f-4542-8a0f-c05cab3a8f7c-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.907657 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.907698 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.907768 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.908001 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62pqq\" (UniqueName: \"kubernetes.io/projected/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-kube-api-access-62pqq\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.908171 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c11b517c-638f-4542-8a0f-c05cab3a8f7c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.908221 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c11b517c-638f-4542-8a0f-c05cab3a8f7c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.908249 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c11b517c-638f-4542-8a0f-c05cab3a8f7c-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.908293 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkdx2\" (UniqueName: \"kubernetes.io/projected/c11b517c-638f-4542-8a0f-c05cab3a8f7c-kube-api-access-pkdx2\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.908316 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" 
Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.908445 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/c11b517c-638f-4542-8a0f-c05cab3a8f7c-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.908624 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:26 crc kubenswrapper[4727]: I1210 15:46:26.908659 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.012321 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.012625 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.012770 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.012938 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.013047 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c11b517c-638f-4542-8a0f-c05cab3a8f7c-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.013176 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-compactor-0\" 
(UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.013303 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.013414 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.013581 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62pqq\" (UniqueName: \"kubernetes.io/projected/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-kube-api-access-62pqq\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.013754 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c11b517c-638f-4542-8a0f-c05cab3a8f7c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.013881 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c11b517c-638f-4542-8a0f-c05cab3a8f7c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.014019 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c11b517c-638f-4542-8a0f-c05cab3a8f7c-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.014138 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.014151 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkdx2\" (UniqueName: \"kubernetes.io/projected/c11b517c-638f-4542-8a0f-c05cab3a8f7c-kube-api-access-pkdx2\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.014340 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: 
\"kubernetes.io/secret/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.014498 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/c11b517c-638f-4542-8a0f-c05cab3a8f7c-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.014708 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.015614 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c11b517c-638f-4542-8a0f-c05cab3a8f7c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.013252 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.014035 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.017306 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c11b517c-638f-4542-8a0f-c05cab3a8f7c-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.013767 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.024098 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c11b517c-638f-4542-8a0f-c05cab3a8f7c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.025636 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.038925 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.038985 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.039462 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c11b517c-638f-4542-8a0f-c05cab3a8f7c-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.040008 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/c11b517c-638f-4542-8a0f-c05cab3a8f7c-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.053933 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkdx2\" (UniqueName: \"kubernetes.io/projected/c11b517c-638f-4542-8a0f-c05cab3a8f7c-kube-api-access-pkdx2\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.072951 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62pqq\" (UniqueName: \"kubernetes.io/projected/bbecc1f3-56d2-4852-8f9d-4397943c2b8b-kube-api-access-62pqq\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.091555 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.123991 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"bbecc1f3-56d2-4852-8f9d-4397943c2b8b\") " 
pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.181882 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c11b517c-638f-4542-8a0f-c05cab3a8f7c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.340891 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:27 crc kubenswrapper[4727]: I1210 15:46:27.371591 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:27 crc kubenswrapper[4727]: E1210 15:46:27.565062 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:46:28 crc kubenswrapper[4727]: I1210 15:46:28.435758 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 15:46:28 crc kubenswrapper[4727]: I1210 15:46:28.475527 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"c11b517c-638f-4542-8a0f-c05cab3a8f7c","Type":"ContainerStarted","Data":"0228218f8f357c1259f6cf72365d2f90cdcbe259405eac480dfa3b09f0d1c27d"} Dec 10 15:46:28 crc kubenswrapper[4727]: I1210 15:46:28.554309 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 15:46:29 crc kubenswrapper[4727]: I1210 15:46:29.489113 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"bbecc1f3-56d2-4852-8f9d-4397943c2b8b","Type":"ContainerStarted","Data":"ac508f21eeb4668b65988c5eb941b885d2ddefb73a8ea68109929f3f12e7de00"} Dec 10 15:46:29 crc kubenswrapper[4727]: I1210 15:46:29.489504 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"bbecc1f3-56d2-4852-8f9d-4397943c2b8b","Type":"ContainerStarted","Data":"d7ca2e0de847fc04eb96e7dfa9421d7792d110deb80e9daf087e7ab835e605d0"} Dec 10 15:46:29 crc kubenswrapper[4727]: I1210 15:46:29.489529 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:46:29 crc kubenswrapper[4727]: I1210 15:46:29.490474 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"c11b517c-638f-4542-8a0f-c05cab3a8f7c","Type":"ContainerStarted","Data":"40feb32e2ec4ffac8bde10c24975be32480527752a79b4517e4965622181d11c"} Dec 10 15:46:29 crc kubenswrapper[4727]: I1210 15:46:29.491158 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:29 crc kubenswrapper[4727]: I1210 15:46:29.523042 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=3.523017639 podStartE2EDuration="3.523017639s" podCreationTimestamp="2025-12-10 15:46:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:46:29.50801204 +0000 UTC m=+4493.702786592" watchObservedRunningTime="2025-12-10 15:46:29.523017639 +0000 UTC m=+4493.717792181" Dec 10 15:46:29 crc kubenswrapper[4727]: I1210 15:46:29.560056 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=3.560031723 podStartE2EDuration="3.560031723s" podCreationTimestamp="2025-12-10 15:46:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:46:29.548235915 +0000 UTC m=+4493.743010457" watchObservedRunningTime="2025-12-10 15:46:29.560031723 +0000 UTC m=+4493.754806265" Dec 10 15:46:29 crc kubenswrapper[4727]: I1210 15:46:29.844345 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" podUID="b9128ddd-7fb4-41b8-a218-2201b9ef57ad" containerName="loki-distributor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:30 crc kubenswrapper[4727]: I1210 15:46:30.208953 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" podUID="2375881f-bb87-45f5-83a1-a314f445945d" containerName="loki-querier" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:31 crc kubenswrapper[4727]: I1210 15:46:31.418780 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" containerName="loki-index-gateway" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:37 crc kubenswrapper[4727]: E1210 15:46:37.567308 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:46:39 crc kubenswrapper[4727]: I1210 15:46:39.842964 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" podUID="b9128ddd-7fb4-41b8-a218-2201b9ef57ad" containerName="loki-distributor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:39 crc kubenswrapper[4727]: I1210 15:46:39.843440 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 15:46:40 crc kubenswrapper[4727]: I1210 15:46:40.082735 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" podUID="2375881f-bb87-45f5-83a1-a314f445945d" containerName="loki-querier" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:40 crc kubenswrapper[4727]: I1210 15:46:40.082857 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 15:46:41 crc kubenswrapper[4727]: I1210 15:46:41.420884 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" containerName="loki-index-gateway" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:41 crc 
kubenswrapper[4727]: I1210 15:46:41.421289 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:41 crc kubenswrapper[4727]: E1210 15:46:41.565831 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:46:43 crc kubenswrapper[4727]: I1210 15:46:43.662708 4727 generic.go:334] "Generic (PLEG): container finished" podID="14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" containerID="e0a379981c73b973631f01231f5b1510b285b57dde56a1cb1358a7a9ac72da91" exitCode=137 Dec 10 15:46:43 crc kubenswrapper[4727]: I1210 15:46:43.662814 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0","Type":"ContainerDied","Data":"e0a379981c73b973631f01231f5b1510b285b57dde56a1cb1358a7a9ac72da91"} Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.024013 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.137534 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-index-gateway-grpc\") pod \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.137579 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8cc4\" (UniqueName: \"kubernetes.io/projected/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-kube-api-access-d8cc4\") pod \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.137665 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-ca-bundle\") pod \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.137758 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"storage\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.137835 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-config\") pod \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.137969 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-index-gateway-http\") pod \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\" (UID: 
\"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.137999 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-loki-s3\") pod \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\" (UID: \"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0\") " Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.139414 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" (UID: "14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0"). InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.139431 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-config" (OuterVolumeSpecName: "config") pod "14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" (UID: "14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.146001 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-index-gateway-http" (OuterVolumeSpecName: "cloudkitty-lokistack-index-gateway-http") pod "14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" (UID: "14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0"). InnerVolumeSpecName "cloudkitty-lokistack-index-gateway-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.146495 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-index-gateway-grpc" (OuterVolumeSpecName: "cloudkitty-lokistack-index-gateway-grpc") pod "14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" (UID: "14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0"). InnerVolumeSpecName "cloudkitty-lokistack-index-gateway-grpc". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.147271 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "storage") pod "14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" (UID: "14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.147945 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-kube-api-access-d8cc4" (OuterVolumeSpecName: "kube-api-access-d8cc4") pod "14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" (UID: "14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0"). InnerVolumeSpecName "kube-api-access-d8cc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.155108 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-loki-s3" (OuterVolumeSpecName: "cloudkitty-loki-s3") pod "14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" (UID: "14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0"). InnerVolumeSpecName "cloudkitty-loki-s3". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.240104 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-index-gateway-grpc\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.240409 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8cc4\" (UniqueName: \"kubernetes.io/projected/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-kube-api-access-d8cc4\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.240421 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.240450 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.240463 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.240472 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-lokistack-index-gateway-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.240484 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0-cloudkitty-loki-s3\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.271698 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.341655 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.675200 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0","Type":"ContainerDied","Data":"8fcf7b0e64d9d676322bc24cfdf7f4918a4afd36de5132238b88a25a2ffc961e"} Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.675263 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.675280 4727 scope.go:117] "RemoveContainer" containerID="e0a379981c73b973631f01231f5b1510b285b57dde56a1cb1358a7a9ac72da91" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.707870 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.724216 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.741642 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 15:46:44 crc kubenswrapper[4727]: E1210 15:46:44.742380 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" containerName="loki-index-gateway" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.742402 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" containerName="loki-index-gateway" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.742677 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" containerName="loki-index-gateway" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.743855 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.746427 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.746678 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.759136 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.759215 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddf8ff74-065e-4883-87c7-2bf30fccd234-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.759295 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/ddf8ff74-065e-4883-87c7-2bf30fccd234-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.759357 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddf8ff74-065e-4883-87c7-2bf30fccd234-config\") pod 
\"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.759385 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ddf8ff74-065e-4883-87c7-2bf30fccd234-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.759646 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/ddf8ff74-065e-4883-87c7-2bf30fccd234-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.759676 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dknmt\" (UniqueName: \"kubernetes.io/projected/ddf8ff74-065e-4883-87c7-2bf30fccd234-kube-api-access-dknmt\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.769525 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.862982 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.863085 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddf8ff74-065e-4883-87c7-2bf30fccd234-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.863129 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/ddf8ff74-065e-4883-87c7-2bf30fccd234-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.863168 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddf8ff74-065e-4883-87c7-2bf30fccd234-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.863324 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.864215 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ddf8ff74-065e-4883-87c7-2bf30fccd234-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.864359 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/ddf8ff74-065e-4883-87c7-2bf30fccd234-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.864395 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dknmt\" (UniqueName: \"kubernetes.io/projected/ddf8ff74-065e-4883-87c7-2bf30fccd234-kube-api-access-dknmt\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.864466 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddf8ff74-065e-4883-87c7-2bf30fccd234-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.864359 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddf8ff74-065e-4883-87c7-2bf30fccd234-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.874164 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/ddf8ff74-065e-4883-87c7-2bf30fccd234-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.876205 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/ddf8ff74-065e-4883-87c7-2bf30fccd234-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.886097 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ddf8ff74-065e-4883-87c7-2bf30fccd234-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " 
pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.900406 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dknmt\" (UniqueName: \"kubernetes.io/projected/ddf8ff74-065e-4883-87c7-2bf30fccd234-kube-api-access-dknmt\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:44 crc kubenswrapper[4727]: I1210 15:46:44.914733 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ddf8ff74-065e-4883-87c7-2bf30fccd234\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:45 crc kubenswrapper[4727]: I1210 15:46:45.126309 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:45 crc kubenswrapper[4727]: I1210 15:46:45.699706 4727 generic.go:334] "Generic (PLEG): container finished" podID="b9128ddd-7fb4-41b8-a218-2201b9ef57ad" containerID="73168f026b7ee2a0518c131b8a8d944ae49445e3d21af643bce690d59f2f9790" exitCode=137 Dec 10 15:46:45 crc kubenswrapper[4727]: I1210 15:46:45.700203 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" event={"ID":"b9128ddd-7fb4-41b8-a218-2201b9ef57ad","Type":"ContainerDied","Data":"73168f026b7ee2a0518c131b8a8d944ae49445e3d21af643bce690d59f2f9790"} Dec 10 15:46:45 crc kubenswrapper[4727]: I1210 15:46:45.709440 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 15:46:45 crc kubenswrapper[4727]: I1210 15:46:45.711917 4727 generic.go:334] "Generic (PLEG): container finished" podID="2375881f-bb87-45f5-83a1-a314f445945d" containerID="9c25f8de23aa7cb02b99c56c1218e7f2a68b907c896fcb837b5620184abe3873" exitCode=137 Dec 10 15:46:45 crc kubenswrapper[4727]: I1210 15:46:45.711983 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" event={"ID":"2375881f-bb87-45f5-83a1-a314f445945d","Type":"ContainerDied","Data":"9c25f8de23aa7cb02b99c56c1218e7f2a68b907c896fcb837b5620184abe3873"} Dec 10 15:46:45 crc kubenswrapper[4727]: I1210 15:46:45.738133 4727 generic.go:334] "Generic (PLEG): container finished" podID="382fdc05-9257-41ca-a032-1dc84b1483e3" containerID="8a80f8a4651493d653683196e9a9da7d12c13d8d0272e1faf3201d7c6dfddf83" exitCode=137 Dec 10 15:46:45 crc kubenswrapper[4727]: I1210 15:46:45.738282 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh" event={"ID":"382fdc05-9257-41ca-a032-1dc84b1483e3","Type":"ContainerDied","Data":"8a80f8a4651493d653683196e9a9da7d12c13d8d0272e1faf3201d7c6dfddf83"} Dec 10 15:46:45 crc kubenswrapper[4727]: W1210 15:46:45.744661 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddf8ff74_065e_4883_87c7_2bf30fccd234.slice/crio-db19decad37d473738806db99a98c23acc22876f12399bcbb10ef5636dc29dea WatchSource:0}: Error finding container db19decad37d473738806db99a98c23acc22876f12399bcbb10ef5636dc29dea: Status 404 returned error can't find the container with id db19decad37d473738806db99a98c23acc22876f12399bcbb10ef5636dc29dea Dec 10 15:46:46 crc kubenswrapper[4727]: 
I1210 15:46:46.133742 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.137465 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.270525 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-ca-bundle\") pod \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.270841 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2375881f-bb87-45f5-83a1-a314f445945d-config\") pod \"2375881f-bb87-45f5-83a1-a314f445945d\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.271023 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-loki-s3\") pod \"2375881f-bb87-45f5-83a1-a314f445945d\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.271091 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-config\") pod \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.271150 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-distributor-http\") pod \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.271198 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-querier-grpc\") pod \"2375881f-bb87-45f5-83a1-a314f445945d\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.271259 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-distributor-grpc\") pod \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.271321 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-ca-bundle\") pod \"2375881f-bb87-45f5-83a1-a314f445945d\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.271366 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpsx4\" (UniqueName: 
\"kubernetes.io/projected/2375881f-bb87-45f5-83a1-a314f445945d-kube-api-access-lpsx4\") pod \"2375881f-bb87-45f5-83a1-a314f445945d\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.271417 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95qb2\" (UniqueName: \"kubernetes.io/projected/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-kube-api-access-95qb2\") pod \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\" (UID: \"b9128ddd-7fb4-41b8-a218-2201b9ef57ad\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.271439 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-querier-http\") pod \"2375881f-bb87-45f5-83a1-a314f445945d\" (UID: \"2375881f-bb87-45f5-83a1-a314f445945d\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.271365 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "b9128ddd-7fb4-41b8-a218-2201b9ef57ad" (UID: "b9128ddd-7fb4-41b8-a218-2201b9ef57ad"). InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.278507 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-querier-grpc" (OuterVolumeSpecName: "cloudkitty-lokistack-querier-grpc") pod "2375881f-bb87-45f5-83a1-a314f445945d" (UID: "2375881f-bb87-45f5-83a1-a314f445945d"). InnerVolumeSpecName "cloudkitty-lokistack-querier-grpc". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.278416 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-config" (OuterVolumeSpecName: "config") pod "b9128ddd-7fb4-41b8-a218-2201b9ef57ad" (UID: "b9128ddd-7fb4-41b8-a218-2201b9ef57ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.279155 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-querier-http" (OuterVolumeSpecName: "cloudkitty-lokistack-querier-http") pod "2375881f-bb87-45f5-83a1-a314f445945d" (UID: "2375881f-bb87-45f5-83a1-a314f445945d"). InnerVolumeSpecName "cloudkitty-lokistack-querier-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.279233 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-loki-s3" (OuterVolumeSpecName: "cloudkitty-loki-s3") pod "2375881f-bb87-45f5-83a1-a314f445945d" (UID: "2375881f-bb87-45f5-83a1-a314f445945d"). InnerVolumeSpecName "cloudkitty-loki-s3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.280089 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2375881f-bb87-45f5-83a1-a314f445945d-config" (OuterVolumeSpecName: "config") pod "2375881f-bb87-45f5-83a1-a314f445945d" (UID: "2375881f-bb87-45f5-83a1-a314f445945d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.280280 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "2375881f-bb87-45f5-83a1-a314f445945d" (UID: "2375881f-bb87-45f5-83a1-a314f445945d"). InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.280547 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-distributor-http" (OuterVolumeSpecName: "cloudkitty-lokistack-distributor-http") pod "b9128ddd-7fb4-41b8-a218-2201b9ef57ad" (UID: "b9128ddd-7fb4-41b8-a218-2201b9ef57ad"). InnerVolumeSpecName "cloudkitty-lokistack-distributor-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.283248 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-distributor-grpc" (OuterVolumeSpecName: "cloudkitty-lokistack-distributor-grpc") pod "b9128ddd-7fb4-41b8-a218-2201b9ef57ad" (UID: "b9128ddd-7fb4-41b8-a218-2201b9ef57ad"). InnerVolumeSpecName "cloudkitty-lokistack-distributor-grpc". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.283592 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2375881f-bb87-45f5-83a1-a314f445945d-kube-api-access-lpsx4" (OuterVolumeSpecName: "kube-api-access-lpsx4") pod "2375881f-bb87-45f5-83a1-a314f445945d" (UID: "2375881f-bb87-45f5-83a1-a314f445945d"). InnerVolumeSpecName "kube-api-access-lpsx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.283718 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-kube-api-access-95qb2" (OuterVolumeSpecName: "kube-api-access-95qb2") pod "b9128ddd-7fb4-41b8-a218-2201b9ef57ad" (UID: "b9128ddd-7fb4-41b8-a218-2201b9ef57ad"). InnerVolumeSpecName "kube-api-access-95qb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.375279 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-querier-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.375314 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.375326 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2375881f-bb87-45f5-83a1-a314f445945d-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.375338 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-loki-s3\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.375350 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.375359 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-distributor-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.375369 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-querier-grpc\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.375378 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-cloudkitty-lokistack-distributor-grpc\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.375408 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2375881f-bb87-45f5-83a1-a314f445945d-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.375418 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpsx4\" (UniqueName: \"kubernetes.io/projected/2375881f-bb87-45f5-83a1-a314f445945d-kube-api-access-lpsx4\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.375427 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95qb2\" (UniqueName: \"kubernetes.io/projected/b9128ddd-7fb4-41b8-a218-2201b9ef57ad-kube-api-access-95qb2\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.418249 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.578730 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-ca-bundle\") pod \"382fdc05-9257-41ca-a032-1dc84b1483e3\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.579013 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-query-frontend-http\") pod \"382fdc05-9257-41ca-a032-1dc84b1483e3\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.579050 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-query-frontend-grpc\") pod \"382fdc05-9257-41ca-a032-1dc84b1483e3\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.579067 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52vdr\" (UniqueName: \"kubernetes.io/projected/382fdc05-9257-41ca-a032-1dc84b1483e3-kube-api-access-52vdr\") pod \"382fdc05-9257-41ca-a032-1dc84b1483e3\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.579123 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/382fdc05-9257-41ca-a032-1dc84b1483e3-config\") pod \"382fdc05-9257-41ca-a032-1dc84b1483e3\" (UID: \"382fdc05-9257-41ca-a032-1dc84b1483e3\") " Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.580686 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "382fdc05-9257-41ca-a032-1dc84b1483e3" (UID: "382fdc05-9257-41ca-a032-1dc84b1483e3"). InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.581219 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/382fdc05-9257-41ca-a032-1dc84b1483e3-config" (OuterVolumeSpecName: "config") pod "382fdc05-9257-41ca-a032-1dc84b1483e3" (UID: "382fdc05-9257-41ca-a032-1dc84b1483e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.581562 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0" path="/var/lib/kubelet/pods/14b5391a-f4f5-4bda-b1d6-0e7edd1b2ec0/volumes" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.585292 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-query-frontend-grpc" (OuterVolumeSpecName: "cloudkitty-lokistack-query-frontend-grpc") pod "382fdc05-9257-41ca-a032-1dc84b1483e3" (UID: "382fdc05-9257-41ca-a032-1dc84b1483e3"). 
InnerVolumeSpecName "cloudkitty-lokistack-query-frontend-grpc". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.587359 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/382fdc05-9257-41ca-a032-1dc84b1483e3-kube-api-access-52vdr" (OuterVolumeSpecName: "kube-api-access-52vdr") pod "382fdc05-9257-41ca-a032-1dc84b1483e3" (UID: "382fdc05-9257-41ca-a032-1dc84b1483e3"). InnerVolumeSpecName "kube-api-access-52vdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.587435 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-query-frontend-http" (OuterVolumeSpecName: "cloudkitty-lokistack-query-frontend-http") pod "382fdc05-9257-41ca-a032-1dc84b1483e3" (UID: "382fdc05-9257-41ca-a032-1dc84b1483e3"). InnerVolumeSpecName "cloudkitty-lokistack-query-frontend-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.682422 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.682486 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-query-frontend-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.682624 4727 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/382fdc05-9257-41ca-a032-1dc84b1483e3-cloudkitty-lokistack-query-frontend-grpc\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.683110 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52vdr\" (UniqueName: \"kubernetes.io/projected/382fdc05-9257-41ca-a032-1dc84b1483e3-kube-api-access-52vdr\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.683139 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/382fdc05-9257-41ca-a032-1dc84b1483e3-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.762124 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" event={"ID":"2375881f-bb87-45f5-83a1-a314f445945d","Type":"ContainerDied","Data":"14b73f1b4dc02c25006b416dcdeec57aa57e0a26624e26a75fc48fe77ea6bdc1"} Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.762176 4727 scope.go:117] "RemoveContainer" containerID="9c25f8de23aa7cb02b99c56c1218e7f2a68b907c896fcb837b5620184abe3873" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.762303 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-9776w" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.767505 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh" event={"ID":"382fdc05-9257-41ca-a032-1dc84b1483e3","Type":"ContainerDied","Data":"8213a2822c6d973b3cefa0b0a672b5be79289d53187435c3f776381f061b7dd5"} Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.767604 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.775065 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"ddf8ff74-065e-4883-87c7-2bf30fccd234","Type":"ContainerStarted","Data":"3d3f0e54476ecc32792f248b00c61e8d63c58045c479fe21fe62ce6fa60dc206"} Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.775108 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"ddf8ff74-065e-4883-87c7-2bf30fccd234","Type":"ContainerStarted","Data":"db19decad37d473738806db99a98c23acc22876f12399bcbb10ef5636dc29dea"} Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.775544 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.786879 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" event={"ID":"b9128ddd-7fb4-41b8-a218-2201b9ef57ad","Type":"ContainerDied","Data":"690c140dde3e1ede7a72b4f56c41d16520655fc826e19f49bad987160c8ae030"} Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.787008 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-hs594" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.790123 4727 scope.go:117] "RemoveContainer" containerID="8a80f8a4651493d653683196e9a9da7d12c13d8d0272e1faf3201d7c6dfddf83" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.835464 4727 scope.go:117] "RemoveContainer" containerID="73168f026b7ee2a0518c131b8a8d944ae49445e3d21af643bce690d59f2f9790" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.838984 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=2.838964478 podStartE2EDuration="2.838964478s" podCreationTimestamp="2025-12-10 15:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:46:46.817579308 +0000 UTC m=+4511.012353850" watchObservedRunningTime="2025-12-10 15:46:46.838964478 +0000 UTC m=+4511.033739010" Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.853649 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-9776w"] Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.865705 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-9776w"] Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.883267 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-hs594"] Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.894805 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-hs594"] Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.904631 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"] Dec 10 15:46:46 crc kubenswrapper[4727]: I1210 15:46:46.916022 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-8cdhh"] Dec 10 15:46:47 crc kubenswrapper[4727]: I1210 15:46:47.347029 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:46:47 crc kubenswrapper[4727]: I1210 15:46:47.385431 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="bbecc1f3-56d2-4852-8f9d-4397943c2b8b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:46:48 crc kubenswrapper[4727]: I1210 15:46:48.573993 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2375881f-bb87-45f5-83a1-a314f445945d" path="/var/lib/kubelet/pods/2375881f-bb87-45f5-83a1-a314f445945d/volumes" Dec 10 15:46:48 crc kubenswrapper[4727]: I1210 15:46:48.574983 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="382fdc05-9257-41ca-a032-1dc84b1483e3" path="/var/lib/kubelet/pods/382fdc05-9257-41ca-a032-1dc84b1483e3/volumes" Dec 10 15:46:48 crc kubenswrapper[4727]: I1210 15:46:48.575640 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9128ddd-7fb4-41b8-a218-2201b9ef57ad" path="/var/lib/kubelet/pods/b9128ddd-7fb4-41b8-a218-2201b9ef57ad/volumes" Dec 10 15:46:49 crc kubenswrapper[4727]: E1210 15:46:49.565967 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:46:53 crc kubenswrapper[4727]: E1210 15:46:53.566364 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:46:57 crc kubenswrapper[4727]: I1210 15:46:57.376437 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="bbecc1f3-56d2-4852-8f9d-4397943c2b8b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:47:01 crc kubenswrapper[4727]: E1210 15:47:01.566694 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:47:05 crc kubenswrapper[4727]: I1210 15:47:05.134702 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:47:06 crc kubenswrapper[4727]: E1210 15:47:06.571452 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:47:07 crc kubenswrapper[4727]: I1210 15:47:07.384455 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="bbecc1f3-56d2-4852-8f9d-4397943c2b8b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:47:14 crc kubenswrapper[4727]: E1210 15:47:14.566071 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:47:17 crc kubenswrapper[4727]: I1210 15:47:17.376488 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="bbecc1f3-56d2-4852-8f9d-4397943c2b8b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:47:20 crc kubenswrapper[4727]: E1210 15:47:20.566993 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:47:25 crc kubenswrapper[4727]: E1210 15:47:25.564173 4727 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:47:27 crc kubenswrapper[4727]: I1210 15:47:27.383143 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:47:33 crc kubenswrapper[4727]: E1210 15:47:33.566418 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:47:36 crc kubenswrapper[4727]: E1210 15:47:36.571592 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:47:37 crc kubenswrapper[4727]: I1210 15:47:37.724323 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:47:37 crc kubenswrapper[4727]: I1210 15:47:37.724378 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:47:46 crc kubenswrapper[4727]: E1210 15:47:46.576846 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:47:49 crc kubenswrapper[4727]: E1210 15:47:49.565349 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:48:00 crc kubenswrapper[4727]: E1210 15:48:00.565231 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:48:04 crc kubenswrapper[4727]: E1210 15:48:04.568386 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:48:07 crc kubenswrapper[4727]: I1210 15:48:07.723620 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:48:07 crc kubenswrapper[4727]: I1210 15:48:07.724227 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:48:14 crc kubenswrapper[4727]: E1210 15:48:14.567127 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:48:19 crc kubenswrapper[4727]: E1210 15:48:19.566662 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:48:28 crc kubenswrapper[4727]: E1210 15:48:28.566650 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:48:31 crc kubenswrapper[4727]: E1210 15:48:31.564892 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:48:37 crc kubenswrapper[4727]: I1210 15:48:37.724226 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:48:37 crc kubenswrapper[4727]: I1210 15:48:37.725971 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:48:37 crc kubenswrapper[4727]: I1210 15:48:37.726114 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 15:48:37 crc 
kubenswrapper[4727]: I1210 15:48:37.727012 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:48:37 crc kubenswrapper[4727]: I1210 15:48:37.727152 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" gracePeriod=600 Dec 10 15:48:37 crc kubenswrapper[4727]: E1210 15:48:37.861967 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:48:37 crc kubenswrapper[4727]: I1210 15:48:37.996303 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" exitCode=0 Dec 10 15:48:37 crc kubenswrapper[4727]: I1210 15:48:37.996353 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022"} Dec 10 15:48:37 crc kubenswrapper[4727]: I1210 15:48:37.996393 4727 scope.go:117] "RemoveContainer" containerID="19871bb1326002a5b787fe9cdab6775b032466dfefd63b7f0ba220e2030b241f" Dec 10 15:48:37 crc kubenswrapper[4727]: I1210 15:48:37.997247 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:48:37 crc kubenswrapper[4727]: E1210 15:48:37.997551 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:48:43 crc kubenswrapper[4727]: E1210 15:48:43.566418 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:48:46 crc kubenswrapper[4727]: I1210 15:48:46.573423 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:48:46 crc kubenswrapper[4727]: E1210 15:48:46.703749 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:48:46 crc kubenswrapper[4727]: E1210 15:48:46.703839 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:48:46 crc kubenswrapper[4727]: E1210 15:48:46.704071 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:48:46 crc kubenswrapper[4727]: E1210 15:48:46.705313 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:48:50 crc kubenswrapper[4727]: I1210 15:48:50.564058 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:48:50 crc kubenswrapper[4727]: E1210 15:48:50.564857 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:48:58 crc kubenswrapper[4727]: E1210 15:48:58.567075 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.076691 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d7wjn"] Dec 10 15:49:01 crc kubenswrapper[4727]: E1210 15:49:01.077653 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9128ddd-7fb4-41b8-a218-2201b9ef57ad" containerName="loki-distributor" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.077675 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9128ddd-7fb4-41b8-a218-2201b9ef57ad" containerName="loki-distributor" Dec 10 15:49:01 crc kubenswrapper[4727]: E1210 15:49:01.077747 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382fdc05-9257-41ca-a032-1dc84b1483e3" containerName="loki-query-frontend" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.077759 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="382fdc05-9257-41ca-a032-1dc84b1483e3" containerName="loki-query-frontend" Dec 10 15:49:01 crc kubenswrapper[4727]: E1210 15:49:01.077775 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2375881f-bb87-45f5-83a1-a314f445945d" containerName="loki-querier" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.077783 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2375881f-bb87-45f5-83a1-a314f445945d" containerName="loki-querier" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.078108 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="382fdc05-9257-41ca-a032-1dc84b1483e3" 
containerName="loki-query-frontend" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.078135 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2375881f-bb87-45f5-83a1-a314f445945d" containerName="loki-querier" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.078147 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9128ddd-7fb4-41b8-a218-2201b9ef57ad" containerName="loki-distributor" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.080876 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.088056 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7wjn"] Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.278155 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ff73958-95d1-40a9-a8fd-39752ea0d813-utilities\") pod \"certified-operators-d7wjn\" (UID: \"5ff73958-95d1-40a9-a8fd-39752ea0d813\") " pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.278234 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ff73958-95d1-40a9-a8fd-39752ea0d813-catalog-content\") pod \"certified-operators-d7wjn\" (UID: \"5ff73958-95d1-40a9-a8fd-39752ea0d813\") " pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.278508 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tvf7\" (UniqueName: \"kubernetes.io/projected/5ff73958-95d1-40a9-a8fd-39752ea0d813-kube-api-access-8tvf7\") pod \"certified-operators-d7wjn\" (UID: \"5ff73958-95d1-40a9-a8fd-39752ea0d813\") " pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.380585 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ff73958-95d1-40a9-a8fd-39752ea0d813-utilities\") pod \"certified-operators-d7wjn\" (UID: \"5ff73958-95d1-40a9-a8fd-39752ea0d813\") " pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.380653 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ff73958-95d1-40a9-a8fd-39752ea0d813-catalog-content\") pod \"certified-operators-d7wjn\" (UID: \"5ff73958-95d1-40a9-a8fd-39752ea0d813\") " pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.380843 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tvf7\" (UniqueName: \"kubernetes.io/projected/5ff73958-95d1-40a9-a8fd-39752ea0d813-kube-api-access-8tvf7\") pod \"certified-operators-d7wjn\" (UID: \"5ff73958-95d1-40a9-a8fd-39752ea0d813\") " pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.381307 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ff73958-95d1-40a9-a8fd-39752ea0d813-utilities\") pod \"certified-operators-d7wjn\" (UID: 
\"5ff73958-95d1-40a9-a8fd-39752ea0d813\") " pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.381331 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ff73958-95d1-40a9-a8fd-39752ea0d813-catalog-content\") pod \"certified-operators-d7wjn\" (UID: \"5ff73958-95d1-40a9-a8fd-39752ea0d813\") " pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.406047 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tvf7\" (UniqueName: \"kubernetes.io/projected/5ff73958-95d1-40a9-a8fd-39752ea0d813-kube-api-access-8tvf7\") pod \"certified-operators-d7wjn\" (UID: \"5ff73958-95d1-40a9-a8fd-39752ea0d813\") " pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.407815 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:01 crc kubenswrapper[4727]: E1210 15:49:01.585282 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:49:01 crc kubenswrapper[4727]: I1210 15:49:01.947688 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7wjn"] Dec 10 15:49:01 crc kubenswrapper[4727]: W1210 15:49:01.954149 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ff73958_95d1_40a9_a8fd_39752ea0d813.slice/crio-c15862c2a02e5dd3a28b3b14e485ed9965f984ddb86e33da97fc0dc3a9c72eae WatchSource:0}: Error finding container c15862c2a02e5dd3a28b3b14e485ed9965f984ddb86e33da97fc0dc3a9c72eae: Status 404 returned error can't find the container with id c15862c2a02e5dd3a28b3b14e485ed9965f984ddb86e33da97fc0dc3a9c72eae Dec 10 15:49:02 crc kubenswrapper[4727]: I1210 15:49:02.237924 4727 generic.go:334] "Generic (PLEG): container finished" podID="5ff73958-95d1-40a9-a8fd-39752ea0d813" containerID="2f97bc547c5bef59c0b228aecf972df4caed590fa2aac562f42185ae49354251" exitCode=0 Dec 10 15:49:02 crc kubenswrapper[4727]: I1210 15:49:02.237986 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7wjn" event={"ID":"5ff73958-95d1-40a9-a8fd-39752ea0d813","Type":"ContainerDied","Data":"2f97bc547c5bef59c0b228aecf972df4caed590fa2aac562f42185ae49354251"} Dec 10 15:49:02 crc kubenswrapper[4727]: I1210 15:49:02.238246 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7wjn" event={"ID":"5ff73958-95d1-40a9-a8fd-39752ea0d813","Type":"ContainerStarted","Data":"c15862c2a02e5dd3a28b3b14e485ed9965f984ddb86e33da97fc0dc3a9c72eae"} Dec 10 15:49:04 crc kubenswrapper[4727]: I1210 15:49:04.565010 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:49:04 crc kubenswrapper[4727]: E1210 15:49:04.566032 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:49:05 crc kubenswrapper[4727]: I1210 15:49:05.274752 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7wjn" event={"ID":"5ff73958-95d1-40a9-a8fd-39752ea0d813","Type":"ContainerStarted","Data":"c99438c97e20e8be2a422872e2063dc67cf7f7ea69b569824536657f0b6c9583"} Dec 10 15:49:07 crc kubenswrapper[4727]: I1210 15:49:07.301651 4727 generic.go:334] "Generic (PLEG): container finished" podID="5ff73958-95d1-40a9-a8fd-39752ea0d813" containerID="c99438c97e20e8be2a422872e2063dc67cf7f7ea69b569824536657f0b6c9583" exitCode=0 Dec 10 15:49:07 crc kubenswrapper[4727]: I1210 15:49:07.301737 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7wjn" event={"ID":"5ff73958-95d1-40a9-a8fd-39752ea0d813","Type":"ContainerDied","Data":"c99438c97e20e8be2a422872e2063dc67cf7f7ea69b569824536657f0b6c9583"} Dec 10 15:49:09 crc kubenswrapper[4727]: I1210 15:49:09.328110 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7wjn" event={"ID":"5ff73958-95d1-40a9-a8fd-39752ea0d813","Type":"ContainerStarted","Data":"530a3baa7a1f7f678a30f0834397fe407c3629921f8363911346f04d8f31dca9"} Dec 10 15:49:09 crc kubenswrapper[4727]: I1210 15:49:09.357138 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d7wjn" podStartSLOduration=2.154442075 podStartE2EDuration="8.357099746s" podCreationTimestamp="2025-12-10 15:49:01 +0000 UTC" firstStartedPulling="2025-12-10 15:49:02.239554702 +0000 UTC m=+4646.434329254" lastFinishedPulling="2025-12-10 15:49:08.442212373 +0000 UTC m=+4652.636986925" observedRunningTime="2025-12-10 15:49:09.347546775 +0000 UTC m=+4653.542321337" watchObservedRunningTime="2025-12-10 15:49:09.357099746 +0000 UTC m=+4653.551874288" Dec 10 15:49:09 crc kubenswrapper[4727]: E1210 15:49:09.564806 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:49:11 crc kubenswrapper[4727]: I1210 15:49:11.408831 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:11 crc kubenswrapper[4727]: I1210 15:49:11.409501 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:11 crc kubenswrapper[4727]: I1210 15:49:11.466547 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:14 crc kubenswrapper[4727]: E1210 15:49:14.565457 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:49:15 crc kubenswrapper[4727]: I1210 
15:49:15.562570 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:49:15 crc kubenswrapper[4727]: E1210 15:49:15.563075 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:49:22 crc kubenswrapper[4727]: I1210 15:49:22.175956 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:22 crc kubenswrapper[4727]: I1210 15:49:22.252659 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d7wjn"] Dec 10 15:49:22 crc kubenswrapper[4727]: I1210 15:49:22.480080 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d7wjn" podUID="5ff73958-95d1-40a9-a8fd-39752ea0d813" containerName="registry-server" containerID="cri-o://530a3baa7a1f7f678a30f0834397fe407c3629921f8363911346f04d8f31dca9" gracePeriod=2 Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.068421 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.161486 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tvf7\" (UniqueName: \"kubernetes.io/projected/5ff73958-95d1-40a9-a8fd-39752ea0d813-kube-api-access-8tvf7\") pod \"5ff73958-95d1-40a9-a8fd-39752ea0d813\" (UID: \"5ff73958-95d1-40a9-a8fd-39752ea0d813\") " Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.161724 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ff73958-95d1-40a9-a8fd-39752ea0d813-utilities\") pod \"5ff73958-95d1-40a9-a8fd-39752ea0d813\" (UID: \"5ff73958-95d1-40a9-a8fd-39752ea0d813\") " Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.161931 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ff73958-95d1-40a9-a8fd-39752ea0d813-catalog-content\") pod \"5ff73958-95d1-40a9-a8fd-39752ea0d813\" (UID: \"5ff73958-95d1-40a9-a8fd-39752ea0d813\") " Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.162844 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ff73958-95d1-40a9-a8fd-39752ea0d813-utilities" (OuterVolumeSpecName: "utilities") pod "5ff73958-95d1-40a9-a8fd-39752ea0d813" (UID: "5ff73958-95d1-40a9-a8fd-39752ea0d813"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.170242 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff73958-95d1-40a9-a8fd-39752ea0d813-kube-api-access-8tvf7" (OuterVolumeSpecName: "kube-api-access-8tvf7") pod "5ff73958-95d1-40a9-a8fd-39752ea0d813" (UID: "5ff73958-95d1-40a9-a8fd-39752ea0d813"). InnerVolumeSpecName "kube-api-access-8tvf7". 
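
The pod_startup_latency_tracker.go:104 entry above for certified-operators-d7wjn is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (firstStartedPulling to lastFinishedPulling). A few lines of Python reproduce the reported values exactly:

    # Timestamps from the entry above, reduced to seconds past 15:49:00 UTC.
    created            = 1.0          # podCreationTimestamp  15:49:01
    first_pull_started = 2.239554702  # firstStartedPulling   15:49:02.239554702
    last_pull_finished = 8.442212373  # lastFinishedPulling   15:49:08.442212373
    observed_running   = 9.357099746  # observedRunningTime   15:49:09.357099746

    e2e = observed_running - created                       # end-to-end startup
    slo = e2e - (last_pull_finished - first_pull_started)  # minus pull window
    print(f"podStartE2EDuration={e2e:.9f}s podStartSLOduration={slo:.9f}s")
    # podStartE2EDuration=8.357099746s podStartSLOduration=2.154442075s

In the earlier entry for cloudkitty-lokistack-index-gateway-0, both pull timestamps are the zero value (0001-01-01), so no pull was observed and the two durations coincide at 2.838964478s.
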
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.232232 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ff73958-95d1-40a9-a8fd-39752ea0d813-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ff73958-95d1-40a9-a8fd-39752ea0d813" (UID: "5ff73958-95d1-40a9-a8fd-39752ea0d813"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.264883 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ff73958-95d1-40a9-a8fd-39752ea0d813-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.264985 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tvf7\" (UniqueName: \"kubernetes.io/projected/5ff73958-95d1-40a9-a8fd-39752ea0d813-kube-api-access-8tvf7\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.265011 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ff73958-95d1-40a9-a8fd-39752ea0d813-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.495407 4727 generic.go:334] "Generic (PLEG): container finished" podID="5ff73958-95d1-40a9-a8fd-39752ea0d813" containerID="530a3baa7a1f7f678a30f0834397fe407c3629921f8363911346f04d8f31dca9" exitCode=0 Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.495488 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7wjn" event={"ID":"5ff73958-95d1-40a9-a8fd-39752ea0d813","Type":"ContainerDied","Data":"530a3baa7a1f7f678a30f0834397fe407c3629921f8363911346f04d8f31dca9"} Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.495510 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d7wjn" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.495574 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7wjn" event={"ID":"5ff73958-95d1-40a9-a8fd-39752ea0d813","Type":"ContainerDied","Data":"c15862c2a02e5dd3a28b3b14e485ed9965f984ddb86e33da97fc0dc3a9c72eae"} Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.495606 4727 scope.go:117] "RemoveContainer" containerID="530a3baa7a1f7f678a30f0834397fe407c3629921f8363911346f04d8f31dca9" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.529819 4727 scope.go:117] "RemoveContainer" containerID="c99438c97e20e8be2a422872e2063dc67cf7f7ea69b569824536657f0b6c9583" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.535852 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d7wjn"] Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.545177 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d7wjn"] Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.589148 4727 scope.go:117] "RemoveContainer" containerID="2f97bc547c5bef59c0b228aecf972df4caed590fa2aac562f42185ae49354251" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.622294 4727 scope.go:117] "RemoveContainer" containerID="530a3baa7a1f7f678a30f0834397fe407c3629921f8363911346f04d8f31dca9" Dec 10 15:49:23 crc kubenswrapper[4727]: E1210 15:49:23.623363 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"530a3baa7a1f7f678a30f0834397fe407c3629921f8363911346f04d8f31dca9\": container with ID starting with 530a3baa7a1f7f678a30f0834397fe407c3629921f8363911346f04d8f31dca9 not found: ID does not exist" containerID="530a3baa7a1f7f678a30f0834397fe407c3629921f8363911346f04d8f31dca9" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.623413 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"530a3baa7a1f7f678a30f0834397fe407c3629921f8363911346f04d8f31dca9"} err="failed to get container status \"530a3baa7a1f7f678a30f0834397fe407c3629921f8363911346f04d8f31dca9\": rpc error: code = NotFound desc = could not find container \"530a3baa7a1f7f678a30f0834397fe407c3629921f8363911346f04d8f31dca9\": container with ID starting with 530a3baa7a1f7f678a30f0834397fe407c3629921f8363911346f04d8f31dca9 not found: ID does not exist" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.623447 4727 scope.go:117] "RemoveContainer" containerID="c99438c97e20e8be2a422872e2063dc67cf7f7ea69b569824536657f0b6c9583" Dec 10 15:49:23 crc kubenswrapper[4727]: E1210 15:49:23.624070 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c99438c97e20e8be2a422872e2063dc67cf7f7ea69b569824536657f0b6c9583\": container with ID starting with c99438c97e20e8be2a422872e2063dc67cf7f7ea69b569824536657f0b6c9583 not found: ID does not exist" containerID="c99438c97e20e8be2a422872e2063dc67cf7f7ea69b569824536657f0b6c9583" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.624112 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c99438c97e20e8be2a422872e2063dc67cf7f7ea69b569824536657f0b6c9583"} err="failed to get container status \"c99438c97e20e8be2a422872e2063dc67cf7f7ea69b569824536657f0b6c9583\": rpc error: code = NotFound desc = could not find 
container \"c99438c97e20e8be2a422872e2063dc67cf7f7ea69b569824536657f0b6c9583\": container with ID starting with c99438c97e20e8be2a422872e2063dc67cf7f7ea69b569824536657f0b6c9583 not found: ID does not exist" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.624133 4727 scope.go:117] "RemoveContainer" containerID="2f97bc547c5bef59c0b228aecf972df4caed590fa2aac562f42185ae49354251" Dec 10 15:49:23 crc kubenswrapper[4727]: E1210 15:49:23.625242 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f97bc547c5bef59c0b228aecf972df4caed590fa2aac562f42185ae49354251\": container with ID starting with 2f97bc547c5bef59c0b228aecf972df4caed590fa2aac562f42185ae49354251 not found: ID does not exist" containerID="2f97bc547c5bef59c0b228aecf972df4caed590fa2aac562f42185ae49354251" Dec 10 15:49:23 crc kubenswrapper[4727]: I1210 15:49:23.625294 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f97bc547c5bef59c0b228aecf972df4caed590fa2aac562f42185ae49354251"} err="failed to get container status \"2f97bc547c5bef59c0b228aecf972df4caed590fa2aac562f42185ae49354251\": rpc error: code = NotFound desc = could not find container \"2f97bc547c5bef59c0b228aecf972df4caed590fa2aac562f42185ae49354251\": container with ID starting with 2f97bc547c5bef59c0b228aecf972df4caed590fa2aac562f42185ae49354251 not found: ID does not exist" Dec 10 15:49:23 crc kubenswrapper[4727]: E1210 15:49:23.720137 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:49:23 crc kubenswrapper[4727]: E1210 15:49:23.720622 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:49:23 crc kubenswrapper[4727]: E1210 15:49:23.720840 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:49:23 crc kubenswrapper[4727]: E1210 15:49:23.722640 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:49:24 crc kubenswrapper[4727]: I1210 15:49:24.579746 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff73958-95d1-40a9-a8fd-39752ea0d813" path="/var/lib/kubelet/pods/5ff73958-95d1-40a9-a8fd-39752ea0d813/volumes" Dec 10 15:49:27 crc kubenswrapper[4727]: I1210 15:49:27.563194 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:49:27 crc kubenswrapper[4727]: E1210 15:49:27.564222 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:49:29 crc kubenswrapper[4727]: E1210 15:49:29.564743 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:49:35 crc kubenswrapper[4727]: E1210 15:49:35.566732 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:49:41 crc kubenswrapper[4727]: E1210 15:49:41.565936 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:49:42 crc kubenswrapper[4727]: I1210 15:49:42.563142 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:49:42 crc kubenswrapper[4727]: E1210 15:49:42.563742 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:49:47 crc kubenswrapper[4727]: E1210 15:49:47.565001 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:49:56 crc kubenswrapper[4727]: E1210 15:49:56.575660 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:49:57 crc kubenswrapper[4727]: I1210 15:49:57.563363 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:49:57 crc kubenswrapper[4727]: E1210 15:49:57.563721 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:50:01 crc kubenswrapper[4727]: E1210 15:50:01.565589 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:50:09 crc kubenswrapper[4727]: E1210 15:50:09.567061 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:50:12 crc kubenswrapper[4727]: I1210 15:50:12.562854 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:50:12 crc kubenswrapper[4727]: E1210 15:50:12.563497 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:50:15 crc kubenswrapper[4727]: E1210 15:50:15.566203 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:50:21 crc kubenswrapper[4727]: E1210 15:50:21.567530 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:50:23 crc kubenswrapper[4727]: I1210 15:50:23.646631 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:50:23 crc kubenswrapper[4727]: E1210 15:50:23.647312 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:50:30 crc kubenswrapper[4727]: E1210 15:50:30.565707 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:50:34 crc kubenswrapper[4727]: E1210 15:50:34.567298 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:50:37 crc kubenswrapper[4727]: I1210 15:50:37.563258 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:50:37 crc kubenswrapper[4727]: E1210 15:50:37.564143 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:50:41 crc kubenswrapper[4727]: E1210 15:50:41.565826 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:50:49 crc kubenswrapper[4727]: E1210 15:50:49.641090 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:50:52 crc kubenswrapper[4727]: I1210 15:50:52.563574 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:50:52 crc kubenswrapper[4727]: E1210 15:50:52.564166 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:50:54 crc kubenswrapper[4727]: E1210 15:50:54.564856 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:51:03 crc kubenswrapper[4727]: E1210 15:51:03.566117 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:51:04 crc kubenswrapper[4727]: I1210 15:51:04.563692 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:51:04 crc kubenswrapper[4727]: E1210 15:51:04.564061 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:51:07 crc kubenswrapper[4727]: E1210 15:51:07.566145 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:51:15 crc kubenswrapper[4727]: I1210 15:51:15.563690 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:51:15 crc kubenswrapper[4727]: E1210 15:51:15.564490 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:51:15 crc kubenswrapper[4727]: E1210 15:51:15.566151 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:51:18 crc kubenswrapper[4727]: I1210 15:51:18.954489 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cth8x"] Dec 10 15:51:18 crc kubenswrapper[4727]: E1210 15:51:18.955544 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff73958-95d1-40a9-a8fd-39752ea0d813" containerName="extract-utilities" Dec 10 15:51:18 crc kubenswrapper[4727]: I1210 15:51:18.955560 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff73958-95d1-40a9-a8fd-39752ea0d813" containerName="extract-utilities" Dec 10 15:51:18 crc kubenswrapper[4727]: E1210 15:51:18.955599 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff73958-95d1-40a9-a8fd-39752ea0d813" containerName="extract-content" Dec 10 15:51:18 crc 
kubenswrapper[4727]: I1210 15:51:18.955605 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff73958-95d1-40a9-a8fd-39752ea0d813" containerName="extract-content" Dec 10 15:51:18 crc kubenswrapper[4727]: E1210 15:51:18.955619 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff73958-95d1-40a9-a8fd-39752ea0d813" containerName="registry-server" Dec 10 15:51:18 crc kubenswrapper[4727]: I1210 15:51:18.955625 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff73958-95d1-40a9-a8fd-39752ea0d813" containerName="registry-server" Dec 10 15:51:18 crc kubenswrapper[4727]: I1210 15:51:18.955830 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff73958-95d1-40a9-a8fd-39752ea0d813" containerName="registry-server" Dec 10 15:51:18 crc kubenswrapper[4727]: I1210 15:51:18.957817 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:18 crc kubenswrapper[4727]: I1210 15:51:18.969133 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cth8x"] Dec 10 15:51:19 crc kubenswrapper[4727]: I1210 15:51:19.051196 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6w44\" (UniqueName: \"kubernetes.io/projected/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-kube-api-access-b6w44\") pod \"redhat-marketplace-cth8x\" (UID: \"0ff0a7bd-e149-40c7-9774-43e2ca3344ba\") " pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:19 crc kubenswrapper[4727]: I1210 15:51:19.051274 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-catalog-content\") pod \"redhat-marketplace-cth8x\" (UID: \"0ff0a7bd-e149-40c7-9774-43e2ca3344ba\") " pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:19 crc kubenswrapper[4727]: I1210 15:51:19.051394 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-utilities\") pod \"redhat-marketplace-cth8x\" (UID: \"0ff0a7bd-e149-40c7-9774-43e2ca3344ba\") " pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:19 crc kubenswrapper[4727]: I1210 15:51:19.153428 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6w44\" (UniqueName: \"kubernetes.io/projected/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-kube-api-access-b6w44\") pod \"redhat-marketplace-cth8x\" (UID: \"0ff0a7bd-e149-40c7-9774-43e2ca3344ba\") " pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:19 crc kubenswrapper[4727]: I1210 15:51:19.153494 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-catalog-content\") pod \"redhat-marketplace-cth8x\" (UID: \"0ff0a7bd-e149-40c7-9774-43e2ca3344ba\") " pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:19 crc kubenswrapper[4727]: I1210 15:51:19.153605 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-utilities\") pod \"redhat-marketplace-cth8x\" (UID: \"0ff0a7bd-e149-40c7-9774-43e2ca3344ba\") " 
pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:19 crc kubenswrapper[4727]: I1210 15:51:19.154116 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-utilities\") pod \"redhat-marketplace-cth8x\" (UID: \"0ff0a7bd-e149-40c7-9774-43e2ca3344ba\") " pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:19 crc kubenswrapper[4727]: I1210 15:51:19.154439 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-catalog-content\") pod \"redhat-marketplace-cth8x\" (UID: \"0ff0a7bd-e149-40c7-9774-43e2ca3344ba\") " pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:19 crc kubenswrapper[4727]: I1210 15:51:19.175672 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6w44\" (UniqueName: \"kubernetes.io/projected/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-kube-api-access-b6w44\") pod \"redhat-marketplace-cth8x\" (UID: \"0ff0a7bd-e149-40c7-9774-43e2ca3344ba\") " pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:19 crc kubenswrapper[4727]: I1210 15:51:19.284303 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:19 crc kubenswrapper[4727]: I1210 15:51:19.891962 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cth8x"] Dec 10 15:51:19 crc kubenswrapper[4727]: I1210 15:51:19.926644 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cth8x" event={"ID":"0ff0a7bd-e149-40c7-9774-43e2ca3344ba","Type":"ContainerStarted","Data":"549ef7f07e6b7440501221df8fe581e6e4db7292641d79a512704cc97246bb04"} Dec 10 15:51:20 crc kubenswrapper[4727]: E1210 15:51:20.356442 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ff0a7bd_e149_40c7_9774_43e2ca3344ba.slice/crio-8429484a682ebd904f2cd5ed0b72ff4e19981362e9eb1d6af16f629b8f530343.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ff0a7bd_e149_40c7_9774_43e2ca3344ba.slice/crio-conmon-8429484a682ebd904f2cd5ed0b72ff4e19981362e9eb1d6af16f629b8f530343.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:51:20 crc kubenswrapper[4727]: I1210 15:51:20.937625 4727 generic.go:334] "Generic (PLEG): container finished" podID="0ff0a7bd-e149-40c7-9774-43e2ca3344ba" containerID="8429484a682ebd904f2cd5ed0b72ff4e19981362e9eb1d6af16f629b8f530343" exitCode=0 Dec 10 15:51:20 crc kubenswrapper[4727]: I1210 15:51:20.937675 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cth8x" event={"ID":"0ff0a7bd-e149-40c7-9774-43e2ca3344ba","Type":"ContainerDied","Data":"8429484a682ebd904f2cd5ed0b72ff4e19981362e9eb1d6af16f629b8f530343"} Dec 10 15:51:21 crc kubenswrapper[4727]: E1210 15:51:21.564619 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 
15:51:22 crc kubenswrapper[4727]: I1210 15:51:22.970470 4727 generic.go:334] "Generic (PLEG): container finished" podID="0ff0a7bd-e149-40c7-9774-43e2ca3344ba" containerID="b5696c43650fc5abe48bac3fe8f6f7d28ebd5a050972bed91db594787f42a3e3" exitCode=0 Dec 10 15:51:22 crc kubenswrapper[4727]: I1210 15:51:22.970643 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cth8x" event={"ID":"0ff0a7bd-e149-40c7-9774-43e2ca3344ba","Type":"ContainerDied","Data":"b5696c43650fc5abe48bac3fe8f6f7d28ebd5a050972bed91db594787f42a3e3"} Dec 10 15:51:23 crc kubenswrapper[4727]: I1210 15:51:23.982388 4727 generic.go:334] "Generic (PLEG): container finished" podID="9cf54f02-df0d-4ef1-a946-b9a1bc279ac9" containerID="2cb1d04611b2892e22970d3b2ee5b14f2f4653a0163201085dbd9431e494602c" exitCode=2 Dec 10 15:51:23 crc kubenswrapper[4727]: I1210 15:51:23.982482 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" event={"ID":"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9","Type":"ContainerDied","Data":"2cb1d04611b2892e22970d3b2ee5b14f2f4653a0163201085dbd9431e494602c"} Dec 10 15:51:23 crc kubenswrapper[4727]: I1210 15:51:23.985311 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cth8x" event={"ID":"0ff0a7bd-e149-40c7-9774-43e2ca3344ba","Type":"ContainerStarted","Data":"59cbc8b8f3f2e2b9318e3e229b7c119ac17bf47563a07bf02a293953559d2390"} Dec 10 15:51:24 crc kubenswrapper[4727]: I1210 15:51:24.020450 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cth8x" podStartSLOduration=3.469027917 podStartE2EDuration="6.020430534s" podCreationTimestamp="2025-12-10 15:51:18 +0000 UTC" firstStartedPulling="2025-12-10 15:51:20.94041434 +0000 UTC m=+4785.135188902" lastFinishedPulling="2025-12-10 15:51:23.491816977 +0000 UTC m=+4787.686591519" observedRunningTime="2025-12-10 15:51:24.019316836 +0000 UTC m=+4788.214091378" watchObservedRunningTime="2025-12-10 15:51:24.020430534 +0000 UTC m=+4788.215205086" Dec 10 15:51:25 crc kubenswrapper[4727]: I1210 15:51:25.569504 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" Dec 10 15:51:25 crc kubenswrapper[4727]: I1210 15:51:25.734121 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-ssh-key\") pod \"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9\" (UID: \"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9\") " Dec 10 15:51:25 crc kubenswrapper[4727]: I1210 15:51:25.734240 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnhpp\" (UniqueName: \"kubernetes.io/projected/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-kube-api-access-xnhpp\") pod \"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9\" (UID: \"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9\") " Dec 10 15:51:25 crc kubenswrapper[4727]: I1210 15:51:25.734273 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-inventory\") pod \"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9\" (UID: \"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9\") " Dec 10 15:51:25 crc kubenswrapper[4727]: I1210 15:51:25.746260 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-kube-api-access-xnhpp" (OuterVolumeSpecName: "kube-api-access-xnhpp") pod "9cf54f02-df0d-4ef1-a946-b9a1bc279ac9" (UID: "9cf54f02-df0d-4ef1-a946-b9a1bc279ac9"). InnerVolumeSpecName "kube-api-access-xnhpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:51:25 crc kubenswrapper[4727]: I1210 15:51:25.762824 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9cf54f02-df0d-4ef1-a946-b9a1bc279ac9" (UID: "9cf54f02-df0d-4ef1-a946-b9a1bc279ac9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:51:25 crc kubenswrapper[4727]: I1210 15:51:25.764193 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-inventory" (OuterVolumeSpecName: "inventory") pod "9cf54f02-df0d-4ef1-a946-b9a1bc279ac9" (UID: "9cf54f02-df0d-4ef1-a946-b9a1bc279ac9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:51:25 crc kubenswrapper[4727]: I1210 15:51:25.837401 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnhpp\" (UniqueName: \"kubernetes.io/projected/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-kube-api-access-xnhpp\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:25 crc kubenswrapper[4727]: I1210 15:51:25.837450 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:25 crc kubenswrapper[4727]: I1210 15:51:25.837460 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cf54f02-df0d-4ef1-a946-b9a1bc279ac9-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:26 crc kubenswrapper[4727]: I1210 15:51:26.007320 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" event={"ID":"9cf54f02-df0d-4ef1-a946-b9a1bc279ac9","Type":"ContainerDied","Data":"1ffdf8e55db230f3e3baf8c05e5474c5453fa8b2daab728b3d7abeff4e7c6079"} Dec 10 15:51:26 crc kubenswrapper[4727]: I1210 15:51:26.007365 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ffdf8e55db230f3e3baf8c05e5474c5453fa8b2daab728b3d7abeff4e7c6079" Dec 10 15:51:26 crc kubenswrapper[4727]: I1210 15:51:26.007392 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49cgn" Dec 10 15:51:29 crc kubenswrapper[4727]: I1210 15:51:29.285275 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:29 crc kubenswrapper[4727]: I1210 15:51:29.285793 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:29 crc kubenswrapper[4727]: I1210 15:51:29.868309 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:30 crc kubenswrapper[4727]: I1210 15:51:30.098056 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:30 crc kubenswrapper[4727]: I1210 15:51:30.148145 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cth8x"] Dec 10 15:51:30 crc kubenswrapper[4727]: I1210 15:51:30.563376 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:51:30 crc kubenswrapper[4727]: E1210 15:51:30.563976 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:51:30 crc kubenswrapper[4727]: E1210 15:51:30.566874 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:51:32 crc kubenswrapper[4727]: I1210 15:51:32.063472 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cth8x" podUID="0ff0a7bd-e149-40c7-9774-43e2ca3344ba" containerName="registry-server" containerID="cri-o://59cbc8b8f3f2e2b9318e3e229b7c119ac17bf47563a07bf02a293953559d2390" gracePeriod=2 Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:33.075714 4727 generic.go:334] "Generic (PLEG): container finished" podID="0ff0a7bd-e149-40c7-9774-43e2ca3344ba" containerID="59cbc8b8f3f2e2b9318e3e229b7c119ac17bf47563a07bf02a293953559d2390" exitCode=0 Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:33.075797 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cth8x" event={"ID":"0ff0a7bd-e149-40c7-9774-43e2ca3344ba","Type":"ContainerDied","Data":"59cbc8b8f3f2e2b9318e3e229b7c119ac17bf47563a07bf02a293953559d2390"} Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:33.076140 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cth8x" event={"ID":"0ff0a7bd-e149-40c7-9774-43e2ca3344ba","Type":"ContainerDied","Data":"549ef7f07e6b7440501221df8fe581e6e4db7292641d79a512704cc97246bb04"} Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:33.076176 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="549ef7f07e6b7440501221df8fe581e6e4db7292641d79a512704cc97246bb04" Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:33.148040 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:33.328332 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-utilities\") pod \"0ff0a7bd-e149-40c7-9774-43e2ca3344ba\" (UID: \"0ff0a7bd-e149-40c7-9774-43e2ca3344ba\") " Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:33.328459 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6w44\" (UniqueName: \"kubernetes.io/projected/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-kube-api-access-b6w44\") pod \"0ff0a7bd-e149-40c7-9774-43e2ca3344ba\" (UID: \"0ff0a7bd-e149-40c7-9774-43e2ca3344ba\") " Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:33.328809 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-catalog-content\") pod \"0ff0a7bd-e149-40c7-9774-43e2ca3344ba\" (UID: \"0ff0a7bd-e149-40c7-9774-43e2ca3344ba\") " Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:33.329698 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-utilities" (OuterVolumeSpecName: "utilities") pod "0ff0a7bd-e149-40c7-9774-43e2ca3344ba" (UID: "0ff0a7bd-e149-40c7-9774-43e2ca3344ba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:33.334263 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-kube-api-access-b6w44" (OuterVolumeSpecName: "kube-api-access-b6w44") pod "0ff0a7bd-e149-40c7-9774-43e2ca3344ba" (UID: "0ff0a7bd-e149-40c7-9774-43e2ca3344ba"). InnerVolumeSpecName "kube-api-access-b6w44". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:33.350310 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ff0a7bd-e149-40c7-9774-43e2ca3344ba" (UID: "0ff0a7bd-e149-40c7-9774-43e2ca3344ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:33.431254 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:33.431561 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:33.431572 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6w44\" (UniqueName: \"kubernetes.io/projected/0ff0a7bd-e149-40c7-9774-43e2ca3344ba-kube-api-access-b6w44\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:34.085205 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cth8x" Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:34.118714 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cth8x"] Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:34.128551 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cth8x"] Dec 10 15:51:34 crc kubenswrapper[4727]: E1210 15:51:34.565791 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:51:34 crc kubenswrapper[4727]: I1210 15:51:34.575721 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ff0a7bd-e149-40c7-9774-43e2ca3344ba" path="/var/lib/kubelet/pods/0ff0a7bd-e149-40c7-9774-43e2ca3344ba/volumes" Dec 10 15:51:41 crc kubenswrapper[4727]: I1210 15:51:41.563886 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:51:41 crc kubenswrapper[4727]: E1210 15:51:41.564592 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:51:41 crc kubenswrapper[4727]: E1210 15:51:41.565620 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:51:49 crc kubenswrapper[4727]: E1210 15:51:49.565780 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:51:52 crc kubenswrapper[4727]: E1210 15:51:52.566353 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:51:56 crc kubenswrapper[4727]: I1210 15:51:56.580032 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:51:56 crc kubenswrapper[4727]: E1210 15:51:56.581206 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:52:03 crc kubenswrapper[4727]: E1210 15:52:03.570329 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:52:04 crc kubenswrapper[4727]: E1210 15:52:04.565437 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:52:08 crc kubenswrapper[4727]: I1210 15:52:08.563409 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:52:08 crc kubenswrapper[4727]: E1210 15:52:08.564354 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:52:15 crc kubenswrapper[4727]: E1210 15:52:15.567039 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:52:17 crc kubenswrapper[4727]: E1210 15:52:17.565916 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:52:22 crc kubenswrapper[4727]: I1210 15:52:22.563942 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:52:22 crc kubenswrapper[4727]: E1210 15:52:22.564724 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:52:27 crc kubenswrapper[4727]: I1210 15:52:27.492515 4727 scope.go:117] "RemoveContainer" containerID="72e10766490a77b74e4c88810cf29a6ede0f5aa7806c7b57950c7dd4650b544f" Dec 10 15:52:27 crc kubenswrapper[4727]: I1210 15:52:27.517687 4727 scope.go:117] "RemoveContainer" containerID="889b6b0ee3858397d2ecb8d7c16ef0a44f29c29044a7a1147d6af3114da80261" Dec 10 15:52:28 crc kubenswrapper[4727]: E1210 15:52:28.565978 4727 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:52:31 crc kubenswrapper[4727]: E1210 15:52:31.566878 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:52:37 crc kubenswrapper[4727]: I1210 15:52:37.563479 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:52:37 crc kubenswrapper[4727]: E1210 15:52:37.564382 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:52:39 crc kubenswrapper[4727]: E1210 15:52:39.566027 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:52:46 crc kubenswrapper[4727]: E1210 15:52:46.651344 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:52:50 crc kubenswrapper[4727]: I1210 15:52:50.564060 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:52:50 crc kubenswrapper[4727]: E1210 15:52:50.564749 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:52:53 crc kubenswrapper[4727]: E1210 15:52:53.565418 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:53:01 crc kubenswrapper[4727]: E1210 15:53:01.567671 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:53:05 crc kubenswrapper[4727]: I1210 15:53:05.563801 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:53:05 crc kubenswrapper[4727]: E1210 15:53:05.565947 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:53:07 crc kubenswrapper[4727]: E1210 15:53:07.565708 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:53:14 crc kubenswrapper[4727]: E1210 15:53:14.564649 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:53:20 crc kubenswrapper[4727]: I1210 15:53:20.564034 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:53:20 crc kubenswrapper[4727]: E1210 15:53:20.565644 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:53:20 crc kubenswrapper[4727]: E1210 15:53:20.566632 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:53:27 crc kubenswrapper[4727]: E1210 15:53:27.705652 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:53:32 crc kubenswrapper[4727]: I1210 15:53:32.563708 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:53:32 crc kubenswrapper[4727]: E1210 15:53:32.564483 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 15:53:32 crc kubenswrapper[4727]: E1210 15:53:32.566053 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:53:39 crc kubenswrapper[4727]: I1210 15:53:39.257458 4727 trace.go:236] Trace[427844838]: "Calculate volume metrics of glance for pod openstack/glance-default-internal-api-0" (10-Dec-2025 15:53:30.587) (total time: 8669ms): Dec 10 15:53:39 crc kubenswrapper[4727]: Trace[427844838]: [8.669702723s] [8.669702723s] END Dec 10 15:53:39 crc kubenswrapper[4727]: I1210 15:53:39.271889 4727 trace.go:236] Trace[243773226]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (10-Dec-2025 15:53:33.532) (total time: 5738ms): Dec 10 15:53:39 crc kubenswrapper[4727]: Trace[243773226]: [5.738933345s] [5.738933345s] END Dec 10 15:53:39 crc kubenswrapper[4727]: I1210 15:53:39.273019 4727 trace.go:236] Trace[149117880]: "Calculate volume metrics of glance for pod openstack/glance-default-external-api-0" (10-Dec-2025 15:53:31.401) (total time: 7871ms): Dec 10 15:53:39 crc kubenswrapper[4727]: Trace[149117880]: [7.871042201s] [7.871042201s] END Dec 10 15:53:42 crc kubenswrapper[4727]: E1210 15:53:42.566694 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:53:47 crc kubenswrapper[4727]: I1210 15:53:47.563428 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022" Dec 10 15:53:48 crc kubenswrapper[4727]: I1210 15:53:48.017991 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"d52d7495ee017e27684a9a64e5411158c01fd0cd114a36152e7e2694730b7222"} Dec 10 15:53:48 crc kubenswrapper[4727]: E1210 15:53:48.565159 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:53:54 crc kubenswrapper[4727]: I1210 15:53:54.569032 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:53:54 crc kubenswrapper[4727]: E1210 15:53:54.675273 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 10 15:53:54 crc kubenswrapper[4727]: E1210 15:53:54.675336 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 10 15:53:54 crc kubenswrapper[4727]: E1210 15:53:54.675477 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 10 15:53:54 crc kubenswrapper[4727]: E1210 15:53:54.676698 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:53:57 crc kubenswrapper[4727]: I1210 15:53:57.812474 4727 trace.go:236] Trace[1835723947]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (10-Dec-2025 15:53:56.627) (total time: 1184ms):
Dec 10 15:53:57 crc kubenswrapper[4727]: Trace[1835723947]: [1.184771373s] [1.184771373s] END
Dec 10 15:53:57 crc kubenswrapper[4727]: I1210 15:53:57.839434 4727 trace.go:236] Trace[1222960684]: "Calculate volume metrics of storage for pod minio-dev/minio" (10-Dec-2025 15:53:49.399) (total time: 8439ms):
Dec 10 15:53:57 crc kubenswrapper[4727]: Trace[1222960684]: [8.439849992s] [8.439849992s] END
Dec 10 15:53:59 crc kubenswrapper[4727]: E1210 15:53:59.696805 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:54:08 crc kubenswrapper[4727]: E1210 15:54:08.566655 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:54:12 crc kubenswrapper[4727]: E1210 15:54:12.567258 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:54:19 crc kubenswrapper[4727]: E1210 15:54:19.565759 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:54:27 crc kubenswrapper[4727]: E1210 15:54:27.304028 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 10 15:54:27 crc kubenswrapper[4727]: E1210 15:54:27.304638 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 10 15:54:27 crc kubenswrapper[4727]: E1210 15:54:27.304816 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 10 15:54:27 crc kubenswrapper[4727]: E1210 15:54:27.306050 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:54:31 crc kubenswrapper[4727]: E1210 15:54:31.565225 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:54:41 crc kubenswrapper[4727]: E1210 15:54:41.567761 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:54:46 crc kubenswrapper[4727]: E1210 15:54:46.575086 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:54:56 crc kubenswrapper[4727]: E1210 15:54:56.573077 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:54:59 crc kubenswrapper[4727]: E1210 15:54:59.565788 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:55:07 crc kubenswrapper[4727]: E1210 15:55:07.565296 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.100322 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bzv2v"]
Dec 10 15:55:12 crc kubenswrapper[4727]: E1210 15:55:12.101240 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf54f02-df0d-4ef1-a946-b9a1bc279ac9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.101265 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf54f02-df0d-4ef1-a946-b9a1bc279ac9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 10 15:55:12 crc kubenswrapper[4727]: E1210 15:55:12.101294 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff0a7bd-e149-40c7-9774-43e2ca3344ba" containerName="extract-utilities"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.101302 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff0a7bd-e149-40c7-9774-43e2ca3344ba" containerName="extract-utilities"
Dec 10 15:55:12 crc kubenswrapper[4727]: E1210 15:55:12.101345 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff0a7bd-e149-40c7-9774-43e2ca3344ba" containerName="extract-content"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.101353 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff0a7bd-e149-40c7-9774-43e2ca3344ba" containerName="extract-content"
Dec 10 15:55:12 crc kubenswrapper[4727]: E1210 15:55:12.101364 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff0a7bd-e149-40c7-9774-43e2ca3344ba" containerName="registry-server"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.101371 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff0a7bd-e149-40c7-9774-43e2ca3344ba" containerName="registry-server"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.101611 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ff0a7bd-e149-40c7-9774-43e2ca3344ba" containerName="registry-server"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.101690 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf54f02-df0d-4ef1-a946-b9a1bc279ac9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.105632 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.119685 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h929c\" (UniqueName: \"kubernetes.io/projected/c6038dec-aca6-4466-9d26-b06a00f7458f-kube-api-access-h929c\") pod \"redhat-operators-bzv2v\" (UID: \"c6038dec-aca6-4466-9d26-b06a00f7458f\") " pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.119847 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6038dec-aca6-4466-9d26-b06a00f7458f-catalog-content\") pod \"redhat-operators-bzv2v\" (UID: \"c6038dec-aca6-4466-9d26-b06a00f7458f\") " pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.119892 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6038dec-aca6-4466-9d26-b06a00f7458f-utilities\") pod \"redhat-operators-bzv2v\" (UID: \"c6038dec-aca6-4466-9d26-b06a00f7458f\") " pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.125871 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bzv2v"]
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.226479 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6038dec-aca6-4466-9d26-b06a00f7458f-catalog-content\") pod \"redhat-operators-bzv2v\" (UID: \"c6038dec-aca6-4466-9d26-b06a00f7458f\") " pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.226582 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6038dec-aca6-4466-9d26-b06a00f7458f-utilities\") pod \"redhat-operators-bzv2v\" (UID: \"c6038dec-aca6-4466-9d26-b06a00f7458f\") " pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.227299 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6038dec-aca6-4466-9d26-b06a00f7458f-catalog-content\") pod \"redhat-operators-bzv2v\" (UID: \"c6038dec-aca6-4466-9d26-b06a00f7458f\") " pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.227335 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h929c\" (UniqueName: \"kubernetes.io/projected/c6038dec-aca6-4466-9d26-b06a00f7458f-kube-api-access-h929c\") pod \"redhat-operators-bzv2v\" (UID: \"c6038dec-aca6-4466-9d26-b06a00f7458f\") " pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.231522 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6038dec-aca6-4466-9d26-b06a00f7458f-utilities\") pod \"redhat-operators-bzv2v\" (UID: \"c6038dec-aca6-4466-9d26-b06a00f7458f\") " pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.257235 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h929c\" (UniqueName: \"kubernetes.io/projected/c6038dec-aca6-4466-9d26-b06a00f7458f-kube-api-access-h929c\") pod \"redhat-operators-bzv2v\" (UID: \"c6038dec-aca6-4466-9d26-b06a00f7458f\") " pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.442630 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:12 crc kubenswrapper[4727]: I1210 15:55:12.939429 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bzv2v"]
Dec 10 15:55:12 crc kubenswrapper[4727]: W1210 15:55:12.948343 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6038dec_aca6_4466_9d26_b06a00f7458f.slice/crio-0d2b97894a7a7378dac617aaef624d7b80ab8c9772afdd8f39958e7e9538a031 WatchSource:0}: Error finding container 0d2b97894a7a7378dac617aaef624d7b80ab8c9772afdd8f39958e7e9538a031: Status 404 returned error can't find the container with id 0d2b97894a7a7378dac617aaef624d7b80ab8c9772afdd8f39958e7e9538a031
Dec 10 15:55:13 crc kubenswrapper[4727]: I1210 15:55:13.572495 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzv2v" event={"ID":"c6038dec-aca6-4466-9d26-b06a00f7458f","Type":"ContainerStarted","Data":"a7af12cd4ba42311485fb0e82260610538b5c4322eb4bad486ac92152e6e7c56"}
Dec 10 15:55:13 crc kubenswrapper[4727]: I1210 15:55:13.572565 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzv2v" event={"ID":"c6038dec-aca6-4466-9d26-b06a00f7458f","Type":"ContainerStarted","Data":"0d2b97894a7a7378dac617aaef624d7b80ab8c9772afdd8f39958e7e9538a031"}
Dec 10 15:55:14 crc kubenswrapper[4727]: E1210 15:55:14.564541 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:55:14 crc kubenswrapper[4727]: I1210 15:55:14.583131 4727 generic.go:334] "Generic (PLEG): container finished" podID="c6038dec-aca6-4466-9d26-b06a00f7458f" containerID="a7af12cd4ba42311485fb0e82260610538b5c4322eb4bad486ac92152e6e7c56" exitCode=0
Dec 10 15:55:14 crc kubenswrapper[4727]: I1210 15:55:14.583183 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzv2v" event={"ID":"c6038dec-aca6-4466-9d26-b06a00f7458f","Type":"ContainerDied","Data":"a7af12cd4ba42311485fb0e82260610538b5c4322eb4bad486ac92152e6e7c56"}
Dec 10 15:55:15 crc kubenswrapper[4727]: I1210 15:55:15.596641 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzv2v" event={"ID":"c6038dec-aca6-4466-9d26-b06a00f7458f","Type":"ContainerStarted","Data":"11d93c1c39199c22f5cf7b7302e0a5789e9d60af966f7603943d0a0db0b3bd65"}
Dec 10 15:55:17 crc kubenswrapper[4727]: I1210 15:55:17.621981 4727 generic.go:334] "Generic (PLEG): container finished" podID="c6038dec-aca6-4466-9d26-b06a00f7458f" containerID="11d93c1c39199c22f5cf7b7302e0a5789e9d60af966f7603943d0a0db0b3bd65" exitCode=0
Dec 10 15:55:17 crc kubenswrapper[4727]: I1210 15:55:17.622044 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzv2v" event={"ID":"c6038dec-aca6-4466-9d26-b06a00f7458f","Type":"ContainerDied","Data":"11d93c1c39199c22f5cf7b7302e0a5789e9d60af966f7603943d0a0db0b3bd65"}
Dec 10 15:55:20 crc kubenswrapper[4727]: E1210 15:55:20.565308 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:55:20 crc kubenswrapper[4727]: I1210 15:55:20.654188 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzv2v" event={"ID":"c6038dec-aca6-4466-9d26-b06a00f7458f","Type":"ContainerStarted","Data":"39f232f540ae2ade45f1b2ca7231d7970c98ee10626086b872a5cae9331e0613"}
Dec 10 15:55:20 crc kubenswrapper[4727]: I1210 15:55:20.674084 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bzv2v" podStartSLOduration=2.856548435 podStartE2EDuration="8.674045111s" podCreationTimestamp="2025-12-10 15:55:12 +0000 UTC" firstStartedPulling="2025-12-10 15:55:14.585726632 +0000 UTC m=+5018.780501174" lastFinishedPulling="2025-12-10 15:55:20.403223308 +0000 UTC m=+5024.597997850" observedRunningTime="2025-12-10 15:55:20.671410405 +0000 UTC m=+5024.866184947" watchObservedRunningTime="2025-12-10 15:55:20.674045111 +0000 UTC m=+5024.868819653"
Dec 10 15:55:22 crc kubenswrapper[4727]: I1210 15:55:22.442861 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:22 crc kubenswrapper[4727]: I1210 15:55:22.443253 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:23 crc kubenswrapper[4727]: I1210 15:55:23.495066 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bzv2v" podUID="c6038dec-aca6-4466-9d26-b06a00f7458f" containerName="registry-server" probeResult="failure" output=<
Dec 10 15:55:23 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s
Dec 10 15:55:23 crc kubenswrapper[4727]: >
Dec 10 15:55:28 crc kubenswrapper[4727]: E1210 15:55:28.566697 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:55:32 crc kubenswrapper[4727]: I1210 15:55:32.500769 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:32 crc kubenswrapper[4727]: I1210 15:55:32.553759 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:32 crc kubenswrapper[4727]: E1210 15:55:32.567096 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:55:32 crc kubenswrapper[4727]: I1210 15:55:32.741857 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bzv2v"]
Dec 10 15:55:33 crc kubenswrapper[4727]: I1210 15:55:33.806177 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bzv2v" podUID="c6038dec-aca6-4466-9d26-b06a00f7458f" containerName="registry-server" containerID="cri-o://39f232f540ae2ade45f1b2ca7231d7970c98ee10626086b872a5cae9331e0613" gracePeriod=2
Dec 10 15:55:34 crc kubenswrapper[4727]: I1210 15:55:34.819011 4727 generic.go:334] "Generic (PLEG): container finished" podID="c6038dec-aca6-4466-9d26-b06a00f7458f" containerID="39f232f540ae2ade45f1b2ca7231d7970c98ee10626086b872a5cae9331e0613" exitCode=0
Dec 10 15:55:34 crc kubenswrapper[4727]: I1210 15:55:34.819987 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzv2v" event={"ID":"c6038dec-aca6-4466-9d26-b06a00f7458f","Type":"ContainerDied","Data":"39f232f540ae2ade45f1b2ca7231d7970c98ee10626086b872a5cae9331e0613"}
Dec 10 15:55:34 crc kubenswrapper[4727]: I1210 15:55:34.820236 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzv2v" event={"ID":"c6038dec-aca6-4466-9d26-b06a00f7458f","Type":"ContainerDied","Data":"0d2b97894a7a7378dac617aaef624d7b80ab8c9772afdd8f39958e7e9538a031"}
Dec 10 15:55:34 crc kubenswrapper[4727]: I1210 15:55:34.820321 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d2b97894a7a7378dac617aaef624d7b80ab8c9772afdd8f39958e7e9538a031"
Dec 10 15:55:34 crc kubenswrapper[4727]: I1210 15:55:34.837092 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:34 crc kubenswrapper[4727]: I1210 15:55:34.962288 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6038dec-aca6-4466-9d26-b06a00f7458f-catalog-content\") pod \"c6038dec-aca6-4466-9d26-b06a00f7458f\" (UID: \"c6038dec-aca6-4466-9d26-b06a00f7458f\") "
Dec 10 15:55:34 crc kubenswrapper[4727]: I1210 15:55:34.962446 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h929c\" (UniqueName: \"kubernetes.io/projected/c6038dec-aca6-4466-9d26-b06a00f7458f-kube-api-access-h929c\") pod \"c6038dec-aca6-4466-9d26-b06a00f7458f\" (UID: \"c6038dec-aca6-4466-9d26-b06a00f7458f\") "
Dec 10 15:55:34 crc kubenswrapper[4727]: I1210 15:55:34.962503 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6038dec-aca6-4466-9d26-b06a00f7458f-utilities\") pod \"c6038dec-aca6-4466-9d26-b06a00f7458f\" (UID: \"c6038dec-aca6-4466-9d26-b06a00f7458f\") "
Dec 10 15:55:34 crc kubenswrapper[4727]: I1210 15:55:34.963468 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6038dec-aca6-4466-9d26-b06a00f7458f-utilities" (OuterVolumeSpecName: "utilities") pod "c6038dec-aca6-4466-9d26-b06a00f7458f" (UID: "c6038dec-aca6-4466-9d26-b06a00f7458f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:55:34 crc kubenswrapper[4727]: I1210 15:55:34.970623 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6038dec-aca6-4466-9d26-b06a00f7458f-kube-api-access-h929c" (OuterVolumeSpecName: "kube-api-access-h929c") pod "c6038dec-aca6-4466-9d26-b06a00f7458f" (UID: "c6038dec-aca6-4466-9d26-b06a00f7458f"). InnerVolumeSpecName "kube-api-access-h929c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:55:35 crc kubenswrapper[4727]: I1210 15:55:35.065989 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h929c\" (UniqueName: \"kubernetes.io/projected/c6038dec-aca6-4466-9d26-b06a00f7458f-kube-api-access-h929c\") on node \"crc\" DevicePath \"\""
Dec 10 15:55:35 crc kubenswrapper[4727]: I1210 15:55:35.066032 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6038dec-aca6-4466-9d26-b06a00f7458f-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 15:55:35 crc kubenswrapper[4727]: I1210 15:55:35.090066 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6038dec-aca6-4466-9d26-b06a00f7458f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6038dec-aca6-4466-9d26-b06a00f7458f" (UID: "c6038dec-aca6-4466-9d26-b06a00f7458f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:55:35 crc kubenswrapper[4727]: I1210 15:55:35.168495 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6038dec-aca6-4466-9d26-b06a00f7458f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 15:55:35 crc kubenswrapper[4727]: I1210 15:55:35.827995 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bzv2v"
Dec 10 15:55:35 crc kubenswrapper[4727]: I1210 15:55:35.862774 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bzv2v"]
Dec 10 15:55:35 crc kubenswrapper[4727]: I1210 15:55:35.872225 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bzv2v"]
Dec 10 15:55:36 crc kubenswrapper[4727]: I1210 15:55:36.582084 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6038dec-aca6-4466-9d26-b06a00f7458f" path="/var/lib/kubelet/pods/c6038dec-aca6-4466-9d26-b06a00f7458f/volumes"
Dec 10 15:55:42 crc kubenswrapper[4727]: E1210 15:55:42.565773 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:55:47 crc kubenswrapper[4727]: E1210 15:55:47.566739 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:55:53 crc kubenswrapper[4727]: E1210 15:55:53.565150 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:55:58 crc kubenswrapper[4727]: E1210 15:55:58.565444 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:56:07 crc kubenswrapper[4727]: I1210 15:56:07.724307 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 15:56:07 crc kubenswrapper[4727]: I1210 15:56:07.725311 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 15:56:08 crc kubenswrapper[4727]: E1210 15:56:08.564975 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:56:12 crc kubenswrapper[4727]: E1210 15:56:12.565061 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:56:21 crc kubenswrapper[4727]: E1210 15:56:21.566336 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:56:26 crc kubenswrapper[4727]: E1210 15:56:26.574322 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:56:35 crc kubenswrapper[4727]: E1210 15:56:35.567089 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:56:37 crc kubenswrapper[4727]: I1210 15:56:37.723862 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 15:56:37 crc kubenswrapper[4727]: I1210 15:56:37.724268 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 15:56:38 crc kubenswrapper[4727]: E1210 15:56:38.567973 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.033092 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp"]
Dec 10 15:56:43 crc kubenswrapper[4727]: E1210 15:56:43.034929 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6038dec-aca6-4466-9d26-b06a00f7458f" containerName="registry-server"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.034951 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6038dec-aca6-4466-9d26-b06a00f7458f" containerName="registry-server"
Dec 10 15:56:43 crc kubenswrapper[4727]: E1210 15:56:43.034991 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6038dec-aca6-4466-9d26-b06a00f7458f" containerName="extract-utilities"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.035000 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6038dec-aca6-4466-9d26-b06a00f7458f" containerName="extract-utilities"
Dec 10 15:56:43 crc kubenswrapper[4727]: E1210 15:56:43.035179 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6038dec-aca6-4466-9d26-b06a00f7458f" containerName="extract-content"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.035194 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6038dec-aca6-4466-9d26-b06a00f7458f" containerName="extract-content"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.035510 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6038dec-aca6-4466-9d26-b06a00f7458f" containerName="registry-server"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.036736 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.042637 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.042856 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.043004 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.043110 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j82js"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.063160 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp"]
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.091331 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a08d6fc9-6282-40ef-9c63-99655ea0444c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s98mp\" (UID: \"a08d6fc9-6282-40ef-9c63-99655ea0444c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.091391 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a08d6fc9-6282-40ef-9c63-99655ea0444c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s98mp\" (UID: \"a08d6fc9-6282-40ef-9c63-99655ea0444c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.091561 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr5gd\" (UniqueName: \"kubernetes.io/projected/a08d6fc9-6282-40ef-9c63-99655ea0444c-kube-api-access-qr5gd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s98mp\" (UID: \"a08d6fc9-6282-40ef-9c63-99655ea0444c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.193356 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a08d6fc9-6282-40ef-9c63-99655ea0444c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s98mp\" (UID: \"a08d6fc9-6282-40ef-9c63-99655ea0444c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.193408 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a08d6fc9-6282-40ef-9c63-99655ea0444c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s98mp\" (UID: \"a08d6fc9-6282-40ef-9c63-99655ea0444c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.193517 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr5gd\" (UniqueName: \"kubernetes.io/projected/a08d6fc9-6282-40ef-9c63-99655ea0444c-kube-api-access-qr5gd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s98mp\" (UID: \"a08d6fc9-6282-40ef-9c63-99655ea0444c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.727794 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a08d6fc9-6282-40ef-9c63-99655ea0444c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s98mp\" (UID: \"a08d6fc9-6282-40ef-9c63-99655ea0444c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.727859 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a08d6fc9-6282-40ef-9c63-99655ea0444c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s98mp\" (UID: \"a08d6fc9-6282-40ef-9c63-99655ea0444c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.737712 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr5gd\" (UniqueName: \"kubernetes.io/projected/a08d6fc9-6282-40ef-9c63-99655ea0444c-kube-api-access-qr5gd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-s98mp\" (UID: \"a08d6fc9-6282-40ef-9c63-99655ea0444c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp"
Dec 10 15:56:43 crc kubenswrapper[4727]: I1210 15:56:43.961658 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp"
Dec 10 15:56:44 crc kubenswrapper[4727]: I1210 15:56:44.529162 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp"]
Dec 10 15:56:44 crc kubenswrapper[4727]: I1210 15:56:44.575563 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp" event={"ID":"a08d6fc9-6282-40ef-9c63-99655ea0444c","Type":"ContainerStarted","Data":"b8d1d441d74a24896522aefce2b931fc22f16af11ed2aecd9146976eeb9976d8"}
Dec 10 15:56:46 crc kubenswrapper[4727]: I1210 15:56:46.611981 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp" event={"ID":"a08d6fc9-6282-40ef-9c63-99655ea0444c","Type":"ContainerStarted","Data":"a502c2ce1192a654af86f298574d899549fcf545f47d0808cdbffec3be922069"}
Dec 10 15:56:46 crc kubenswrapper[4727]: I1210 15:56:46.643513 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp" podStartSLOduration=2.656391653 podStartE2EDuration="3.643480929s" podCreationTimestamp="2025-12-10 15:56:43 +0000 UTC" firstStartedPulling="2025-12-10 15:56:44.532964777 +0000 UTC m=+5108.727739319" lastFinishedPulling="2025-12-10 15:56:45.520054023 +0000 UTC m=+5109.714828595" observedRunningTime="2025-12-10 15:56:46.630974713 +0000 UTC m=+5110.825749255" watchObservedRunningTime="2025-12-10 15:56:46.643480929 +0000 UTC m=+5110.838255481"
Dec 10 15:56:49 crc kubenswrapper[4727]: E1210 15:56:49.565010 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:56:52 crc kubenswrapper[4727]: E1210 15:56:52.565765 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:57:04 crc kubenswrapper[4727]: E1210 15:57:04.565338 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:57:06 crc kubenswrapper[4727]: E1210 15:57:06.578768 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:57:07 crc kubenswrapper[4727]: I1210 15:57:07.724475 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 15:57:07 crc kubenswrapper[4727]: I1210 15:57:07.724797 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 15:57:07 crc kubenswrapper[4727]: I1210 15:57:07.724838 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v"
Dec 10 15:57:07 crc kubenswrapper[4727]: I1210 15:57:07.726444 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d52d7495ee017e27684a9a64e5411158c01fd0cd114a36152e7e2694730b7222"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 10 15:57:07 crc kubenswrapper[4727]: I1210 15:57:07.726508 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://d52d7495ee017e27684a9a64e5411158c01fd0cd114a36152e7e2694730b7222" gracePeriod=600
Dec 10 15:57:08 crc kubenswrapper[4727]: I1210 15:57:08.837490 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="d52d7495ee017e27684a9a64e5411158c01fd0cd114a36152e7e2694730b7222" exitCode=0
Dec 10 15:57:08 crc kubenswrapper[4727]: I1210 15:57:08.837576 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"d52d7495ee017e27684a9a64e5411158c01fd0cd114a36152e7e2694730b7222"}
Dec 10 15:57:08 crc kubenswrapper[4727]: I1210 15:57:08.837822 4727 scope.go:117] "RemoveContainer" containerID="afa61f477514923785187830637fc92cc0264175562c7cfe19268182daab6022"
Dec 10 15:57:09 crc kubenswrapper[4727]: I1210 15:57:09.850334 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363"}
Dec 10 15:57:19 crc kubenswrapper[4727]: E1210 15:57:19.567864 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:57:19 crc kubenswrapper[4727]: E1210 15:57:19.567938 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:57:27 crc kubenswrapper[4727]: I1210 15:57:27.797855 4727 scope.go:117] "RemoveContainer" containerID="59cbc8b8f3f2e2b9318e3e229b7c119ac17bf47563a07bf02a293953559d2390"
Dec 10 15:57:27 crc kubenswrapper[4727]: I1210 15:57:27.822073 4727 scope.go:117] "RemoveContainer" containerID="b5696c43650fc5abe48bac3fe8f6f7d28ebd5a050972bed91db594787f42a3e3"
Dec 10 15:57:27 crc kubenswrapper[4727]: I1210 15:57:27.860197 4727 scope.go:117] "RemoveContainer" containerID="8429484a682ebd904f2cd5ed0b72ff4e19981362e9eb1d6af16f629b8f530343"
Dec 10 15:57:30 crc kubenswrapper[4727]: E1210 15:57:30.565757 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:57:32 crc kubenswrapper[4727]: E1210 15:57:32.564970 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:57:37 crc kubenswrapper[4727]: I1210 15:57:37.117828 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4kpdd"]
Dec 10 15:57:37 crc kubenswrapper[4727]: I1210 15:57:37.122388 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:37 crc kubenswrapper[4727]: I1210 15:57:37.128275 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4kpdd"]
Dec 10 15:57:37 crc kubenswrapper[4727]: I1210 15:57:37.150465 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69g29\" (UniqueName: \"kubernetes.io/projected/84d9238c-c667-44e8-8de8-cd00f5e3446e-kube-api-access-69g29\") pod \"community-operators-4kpdd\" (UID: \"84d9238c-c667-44e8-8de8-cd00f5e3446e\") " pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:37 crc kubenswrapper[4727]: I1210 15:57:37.150553 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d9238c-c667-44e8-8de8-cd00f5e3446e-utilities\") pod \"community-operators-4kpdd\" (UID: \"84d9238c-c667-44e8-8de8-cd00f5e3446e\") " pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:37 crc kubenswrapper[4727]: I1210 15:57:37.150586 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d9238c-c667-44e8-8de8-cd00f5e3446e-catalog-content\") pod \"community-operators-4kpdd\" (UID: \"84d9238c-c667-44e8-8de8-cd00f5e3446e\") " pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:37 crc kubenswrapper[4727]: I1210 15:57:37.252713 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69g29\" (UniqueName: \"kubernetes.io/projected/84d9238c-c667-44e8-8de8-cd00f5e3446e-kube-api-access-69g29\") pod \"community-operators-4kpdd\" (UID: \"84d9238c-c667-44e8-8de8-cd00f5e3446e\") " pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:37 crc kubenswrapper[4727]: I1210 15:57:37.252803 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d9238c-c667-44e8-8de8-cd00f5e3446e-utilities\") pod \"community-operators-4kpdd\" (UID: \"84d9238c-c667-44e8-8de8-cd00f5e3446e\") " pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:37 crc kubenswrapper[4727]: I1210 15:57:37.252840 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d9238c-c667-44e8-8de8-cd00f5e3446e-catalog-content\") pod \"community-operators-4kpdd\" (UID: \"84d9238c-c667-44e8-8de8-cd00f5e3446e\") " pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:37 crc kubenswrapper[4727]: I1210 15:57:37.253330 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d9238c-c667-44e8-8de8-cd00f5e3446e-utilities\") pod \"community-operators-4kpdd\" (UID: \"84d9238c-c667-44e8-8de8-cd00f5e3446e\") " pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:37 crc kubenswrapper[4727]: I1210 15:57:37.253372 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d9238c-c667-44e8-8de8-cd00f5e3446e-catalog-content\") pod \"community-operators-4kpdd\" (UID: \"84d9238c-c667-44e8-8de8-cd00f5e3446e\") " pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:37 crc kubenswrapper[4727]: I1210 15:57:37.272521 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69g29\" (UniqueName: \"kubernetes.io/projected/84d9238c-c667-44e8-8de8-cd00f5e3446e-kube-api-access-69g29\") pod \"community-operators-4kpdd\" (UID: \"84d9238c-c667-44e8-8de8-cd00f5e3446e\") " pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:37 crc kubenswrapper[4727]: I1210 15:57:37.448263 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:38 crc kubenswrapper[4727]: I1210 15:57:38.018254 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4kpdd"]
Dec 10 15:57:38 crc kubenswrapper[4727]: I1210 15:57:38.152353 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4kpdd" event={"ID":"84d9238c-c667-44e8-8de8-cd00f5e3446e","Type":"ContainerStarted","Data":"3526fa9acc830ac15153cf15c766047cda9b9487a2f5204e09d2b4b13137499e"}
Dec 10 15:57:39 crc kubenswrapper[4727]: I1210 15:57:39.182465 4727 generic.go:334] "Generic (PLEG): container finished" podID="84d9238c-c667-44e8-8de8-cd00f5e3446e" containerID="e84cdd3dcd830bb4bbe86fcc277ffd3fcaf793e5db8dfb215c96795cbc914260" exitCode=0
Dec 10 15:57:39 crc kubenswrapper[4727]: I1210 15:57:39.182649 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4kpdd" event={"ID":"84d9238c-c667-44e8-8de8-cd00f5e3446e","Type":"ContainerDied","Data":"e84cdd3dcd830bb4bbe86fcc277ffd3fcaf793e5db8dfb215c96795cbc914260"}
Dec 10 15:57:40 crc kubenswrapper[4727]: I1210 15:57:40.196250 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4kpdd" event={"ID":"84d9238c-c667-44e8-8de8-cd00f5e3446e","Type":"ContainerStarted","Data":"fb5c295a0d843bca18db462b2caf4ac1b39703c6d7f60773cb4239c0251cfec7"}
Dec 10 15:57:41 crc kubenswrapper[4727]: I1210 15:57:41.212237 4727 generic.go:334] "Generic (PLEG): container finished" podID="84d9238c-c667-44e8-8de8-cd00f5e3446e" containerID="fb5c295a0d843bca18db462b2caf4ac1b39703c6d7f60773cb4239c0251cfec7" exitCode=0
Dec 10 15:57:41 crc kubenswrapper[4727]: I1210 15:57:41.212293 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4kpdd" event={"ID":"84d9238c-c667-44e8-8de8-cd00f5e3446e","Type":"ContainerDied","Data":"fb5c295a0d843bca18db462b2caf4ac1b39703c6d7f60773cb4239c0251cfec7"}
Dec 10 15:57:42 crc kubenswrapper[4727]: I1210 15:57:42.223403 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4kpdd" event={"ID":"84d9238c-c667-44e8-8de8-cd00f5e3446e","Type":"ContainerStarted","Data":"0b3fbd0142b60f549fc381465f27a4429326c40b1ca0a12bb3bf7821c9c8542d"}
Dec 10 15:57:42 crc kubenswrapper[4727]: I1210 15:57:42.257564 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4kpdd" podStartSLOduration=2.656208556 podStartE2EDuration="5.257477301s" podCreationTimestamp="2025-12-10 15:57:37 +0000 UTC" firstStartedPulling="2025-12-10 15:57:39.185050998 +0000 UTC m=+5163.379825540" lastFinishedPulling="2025-12-10 15:57:41.786319723 +0000 UTC m=+5165.981094285" observedRunningTime="2025-12-10 15:57:42.247076039 +0000 UTC m=+5166.441850581" watchObservedRunningTime="2025-12-10 15:57:42.257477301 +0000 UTC m=+5166.452251863"
Dec 10 15:57:44 crc kubenswrapper[4727]: E1210 15:57:44.565787 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:57:47 crc kubenswrapper[4727]: I1210 15:57:47.448448 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:47 crc kubenswrapper[4727]: I1210 15:57:47.448834 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:47 crc kubenswrapper[4727]: I1210 15:57:47.496026 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:47 crc kubenswrapper[4727]: E1210 15:57:47.566670 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:57:48 crc kubenswrapper[4727]: I1210 15:57:48.326951 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:48 crc kubenswrapper[4727]: I1210 15:57:48.406247 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4kpdd"]
Dec 10 15:57:50 crc kubenswrapper[4727]: I1210 15:57:50.303323 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4kpdd" podUID="84d9238c-c667-44e8-8de8-cd00f5e3446e" containerName="registry-server" containerID="cri-o://0b3fbd0142b60f549fc381465f27a4429326c40b1ca0a12bb3bf7821c9c8542d" gracePeriod=2
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.306448 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.317840 4727 generic.go:334] "Generic (PLEG): container finished" podID="84d9238c-c667-44e8-8de8-cd00f5e3446e" containerID="0b3fbd0142b60f549fc381465f27a4429326c40b1ca0a12bb3bf7821c9c8542d" exitCode=0
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.317882 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4kpdd" event={"ID":"84d9238c-c667-44e8-8de8-cd00f5e3446e","Type":"ContainerDied","Data":"0b3fbd0142b60f549fc381465f27a4429326c40b1ca0a12bb3bf7821c9c8542d"}
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.317928 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4kpdd"
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.317949 4727 scope.go:117] "RemoveContainer" containerID="0b3fbd0142b60f549fc381465f27a4429326c40b1ca0a12bb3bf7821c9c8542d"
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.317935 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4kpdd" event={"ID":"84d9238c-c667-44e8-8de8-cd00f5e3446e","Type":"ContainerDied","Data":"3526fa9acc830ac15153cf15c766047cda9b9487a2f5204e09d2b4b13137499e"}
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.364108 4727 scope.go:117] "RemoveContainer" containerID="fb5c295a0d843bca18db462b2caf4ac1b39703c6d7f60773cb4239c0251cfec7"
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.401025 4727 scope.go:117] "RemoveContainer" containerID="e84cdd3dcd830bb4bbe86fcc277ffd3fcaf793e5db8dfb215c96795cbc914260"
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.485112 4727 scope.go:117] "RemoveContainer" containerID="0b3fbd0142b60f549fc381465f27a4429326c40b1ca0a12bb3bf7821c9c8542d"
Dec 10 15:57:51 crc kubenswrapper[4727]: E1210 15:57:51.485630 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b3fbd0142b60f549fc381465f27a4429326c40b1ca0a12bb3bf7821c9c8542d\": container with ID starting with 0b3fbd0142b60f549fc381465f27a4429326c40b1ca0a12bb3bf7821c9c8542d not found: ID does not exist" containerID="0b3fbd0142b60f549fc381465f27a4429326c40b1ca0a12bb3bf7821c9c8542d"
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.485678 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b3fbd0142b60f549fc381465f27a4429326c40b1ca0a12bb3bf7821c9c8542d"} err="failed to get container status \"0b3fbd0142b60f549fc381465f27a4429326c40b1ca0a12bb3bf7821c9c8542d\": rpc error: code = NotFound desc = could not find container \"0b3fbd0142b60f549fc381465f27a4429326c40b1ca0a12bb3bf7821c9c8542d\": container with ID starting with 0b3fbd0142b60f549fc381465f27a4429326c40b1ca0a12bb3bf7821c9c8542d not found: ID does not exist"
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.485708 4727 scope.go:117] "RemoveContainer" containerID="fb5c295a0d843bca18db462b2caf4ac1b39703c6d7f60773cb4239c0251cfec7"
Dec 10 15:57:51 crc kubenswrapper[4727]: E1210 15:57:51.486213 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb5c295a0d843bca18db462b2caf4ac1b39703c6d7f60773cb4239c0251cfec7\": container with ID starting with fb5c295a0d843bca18db462b2caf4ac1b39703c6d7f60773cb4239c0251cfec7 not found: ID does not exist" containerID="fb5c295a0d843bca18db462b2caf4ac1b39703c6d7f60773cb4239c0251cfec7"
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.486246 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb5c295a0d843bca18db462b2caf4ac1b39703c6d7f60773cb4239c0251cfec7"} err="failed to get container status \"fb5c295a0d843bca18db462b2caf4ac1b39703c6d7f60773cb4239c0251cfec7\": rpc error: code = NotFound desc = could not find container \"fb5c295a0d843bca18db462b2caf4ac1b39703c6d7f60773cb4239c0251cfec7\": container with ID starting with fb5c295a0d843bca18db462b2caf4ac1b39703c6d7f60773cb4239c0251cfec7 not found: ID does not exist"
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.486264 4727 scope.go:117] "RemoveContainer" containerID="e84cdd3dcd830bb4bbe86fcc277ffd3fcaf793e5db8dfb215c96795cbc914260"
Dec 10 15:57:51 crc kubenswrapper[4727]: E1210 15:57:51.486796 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e84cdd3dcd830bb4bbe86fcc277ffd3fcaf793e5db8dfb215c96795cbc914260\": container with ID starting with e84cdd3dcd830bb4bbe86fcc277ffd3fcaf793e5db8dfb215c96795cbc914260 not found: ID does not exist" containerID="e84cdd3dcd830bb4bbe86fcc277ffd3fcaf793e5db8dfb215c96795cbc914260"
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.486845 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e84cdd3dcd830bb4bbe86fcc277ffd3fcaf793e5db8dfb215c96795cbc914260"} err="failed to get container status \"e84cdd3dcd830bb4bbe86fcc277ffd3fcaf793e5db8dfb215c96795cbc914260\": rpc error: code = NotFound desc = could not find container \"e84cdd3dcd830bb4bbe86fcc277ffd3fcaf793e5db8dfb215c96795cbc914260\": container with ID starting with e84cdd3dcd830bb4bbe86fcc277ffd3fcaf793e5db8dfb215c96795cbc914260 not found: ID does not exist"
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.500099 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d9238c-c667-44e8-8de8-cd00f5e3446e-utilities\") pod \"84d9238c-c667-44e8-8de8-cd00f5e3446e\" (UID: \"84d9238c-c667-44e8-8de8-cd00f5e3446e\") "
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.500313 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69g29\" (UniqueName: \"kubernetes.io/projected/84d9238c-c667-44e8-8de8-cd00f5e3446e-kube-api-access-69g29\") pod \"84d9238c-c667-44e8-8de8-cd00f5e3446e\" (UID: \"84d9238c-c667-44e8-8de8-cd00f5e3446e\") "
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.500521 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d9238c-c667-44e8-8de8-cd00f5e3446e-catalog-content\") pod \"84d9238c-c667-44e8-8de8-cd00f5e3446e\" (UID: \"84d9238c-c667-44e8-8de8-cd00f5e3446e\") "
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.501325 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84d9238c-c667-44e8-8de8-cd00f5e3446e-utilities" (OuterVolumeSpecName: "utilities") pod "84d9238c-c667-44e8-8de8-cd00f5e3446e" (UID: "84d9238c-c667-44e8-8de8-cd00f5e3446e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.501492 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d9238c-c667-44e8-8de8-cd00f5e3446e-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.505892 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d9238c-c667-44e8-8de8-cd00f5e3446e-kube-api-access-69g29" (OuterVolumeSpecName: "kube-api-access-69g29") pod "84d9238c-c667-44e8-8de8-cd00f5e3446e" (UID: "84d9238c-c667-44e8-8de8-cd00f5e3446e"). InnerVolumeSpecName "kube-api-access-69g29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.560653 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84d9238c-c667-44e8-8de8-cd00f5e3446e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84d9238c-c667-44e8-8de8-cd00f5e3446e" (UID: "84d9238c-c667-44e8-8de8-cd00f5e3446e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.604950 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d9238c-c667-44e8-8de8-cd00f5e3446e-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.604985 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69g29\" (UniqueName: \"kubernetes.io/projected/84d9238c-c667-44e8-8de8-cd00f5e3446e-kube-api-access-69g29\") on node \"crc\" DevicePath \"\""
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.664570 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4kpdd"]
Dec 10 15:57:51 crc kubenswrapper[4727]: I1210 15:57:51.675707 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4kpdd"]
Dec 10 15:57:52 crc kubenswrapper[4727]: I1210 15:57:52.575818 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84d9238c-c667-44e8-8de8-cd00f5e3446e" path="/var/lib/kubelet/pods/84d9238c-c667-44e8-8de8-cd00f5e3446e/volumes"
Dec 10 15:57:55 crc kubenswrapper[4727]: E1210 15:57:55.565107 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:57:59 crc kubenswrapper[4727]: E1210 15:57:59.567283 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:58:09 crc kubenswrapper[4727]: E1210 15:58:09.566051 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 15:58:10 crc kubenswrapper[4727]: E1210 15:58:10.565234 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 15:58:23 crc kubenswrapper[4727]: E1210 15:58:23.565768 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:58:24 crc kubenswrapper[4727]: E1210 15:58:24.566906 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:58:35 crc kubenswrapper[4727]: E1210 15:58:35.565526 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:58:37 crc kubenswrapper[4727]: E1210 15:58:37.564791 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:58:49 crc kubenswrapper[4727]: E1210 15:58:49.573599 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:58:50 crc kubenswrapper[4727]: E1210 15:58:50.565920 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:59:00 crc kubenswrapper[4727]: E1210 15:59:00.565451 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:59:01 crc kubenswrapper[4727]: I1210 15:59:01.565386 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:59:01 crc kubenswrapper[4727]: E1210 15:59:01.693773 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:59:01 crc kubenswrapper[4727]: E1210 15:59:01.693852 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:59:01 crc kubenswrapper[4727]: E1210 15:59:01.694057 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 10 15:59:01 crc kubenswrapper[4727]: E1210 15:59:01.695299 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.418609 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dbnrg"] Dec 10 15:59:10 crc kubenswrapper[4727]: E1210 15:59:10.419727 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d9238c-c667-44e8-8de8-cd00f5e3446e" containerName="extract-utilities" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.419756 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d9238c-c667-44e8-8de8-cd00f5e3446e" containerName="extract-utilities" Dec 10 15:59:10 crc kubenswrapper[4727]: E1210 15:59:10.419790 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d9238c-c667-44e8-8de8-cd00f5e3446e" containerName="registry-server" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.419798 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d9238c-c667-44e8-8de8-cd00f5e3446e" containerName="registry-server" Dec 10 15:59:10 crc kubenswrapper[4727]: E1210 15:59:10.419823 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d9238c-c667-44e8-8de8-cd00f5e3446e" containerName="extract-content" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.419831 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d9238c-c667-44e8-8de8-cd00f5e3446e" containerName="extract-content" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.420170 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d9238c-c667-44e8-8de8-cd00f5e3446e" containerName="registry-server" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.422243 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.439918 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dbnrg"] Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.517317 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vghpl\" (UniqueName: \"kubernetes.io/projected/50df53b2-e308-4f73-808b-fed950712814-kube-api-access-vghpl\") pod \"certified-operators-dbnrg\" (UID: \"50df53b2-e308-4f73-808b-fed950712814\") " pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.517397 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50df53b2-e308-4f73-808b-fed950712814-utilities\") pod \"certified-operators-dbnrg\" (UID: \"50df53b2-e308-4f73-808b-fed950712814\") " pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.517420 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50df53b2-e308-4f73-808b-fed950712814-catalog-content\") pod \"certified-operators-dbnrg\" (UID: \"50df53b2-e308-4f73-808b-fed950712814\") " pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.620961 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50df53b2-e308-4f73-808b-fed950712814-utilities\") pod \"certified-operators-dbnrg\" (UID: \"50df53b2-e308-4f73-808b-fed950712814\") " pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.621060 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50df53b2-e308-4f73-808b-fed950712814-catalog-content\") pod \"certified-operators-dbnrg\" (UID: \"50df53b2-e308-4f73-808b-fed950712814\") " pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.621365 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vghpl\" (UniqueName: \"kubernetes.io/projected/50df53b2-e308-4f73-808b-fed950712814-kube-api-access-vghpl\") pod \"certified-operators-dbnrg\" (UID: \"50df53b2-e308-4f73-808b-fed950712814\") " pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.621537 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50df53b2-e308-4f73-808b-fed950712814-utilities\") pod \"certified-operators-dbnrg\" (UID: \"50df53b2-e308-4f73-808b-fed950712814\") " pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.621734 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50df53b2-e308-4f73-808b-fed950712814-catalog-content\") pod \"certified-operators-dbnrg\" (UID: \"50df53b2-e308-4f73-808b-fed950712814\") " pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.650240 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vghpl\" (UniqueName: \"kubernetes.io/projected/50df53b2-e308-4f73-808b-fed950712814-kube-api-access-vghpl\") pod \"certified-operators-dbnrg\" (UID: \"50df53b2-e308-4f73-808b-fed950712814\") " pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:10 crc kubenswrapper[4727]: I1210 15:59:10.743754 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:11 crc kubenswrapper[4727]: I1210 15:59:11.266962 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dbnrg"] Dec 10 15:59:11 crc kubenswrapper[4727]: I1210 15:59:11.365353 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbnrg" event={"ID":"50df53b2-e308-4f73-808b-fed950712814","Type":"ContainerStarted","Data":"6a335ee7d4f8c7f75e07203113cd8a4ae14128faf94e0e567651becd6be9cdfc"} Dec 10 15:59:12 crc kubenswrapper[4727]: I1210 15:59:12.375504 4727 generic.go:334] "Generic (PLEG): container finished" podID="50df53b2-e308-4f73-808b-fed950712814" containerID="0970bcdf5dbc6454085e145793df4b3a98ddfac1a45595902f3bfcc76d06b2aa" exitCode=0 Dec 10 15:59:12 crc kubenswrapper[4727]: I1210 15:59:12.375603 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbnrg" event={"ID":"50df53b2-e308-4f73-808b-fed950712814","Type":"ContainerDied","Data":"0970bcdf5dbc6454085e145793df4b3a98ddfac1a45595902f3bfcc76d06b2aa"} Dec 10 15:59:13 crc kubenswrapper[4727]: I1210 15:59:13.387832 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbnrg" event={"ID":"50df53b2-e308-4f73-808b-fed950712814","Type":"ContainerStarted","Data":"e1f85a73947869015378bb829f6b4ef1d8e537d0a43ad7d37e45a734b54bd180"} Dec 10 15:59:13 crc kubenswrapper[4727]: E1210 15:59:13.565743 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:59:15 crc kubenswrapper[4727]: I1210 15:59:15.408602 4727 generic.go:334] "Generic (PLEG): container finished" podID="50df53b2-e308-4f73-808b-fed950712814" containerID="e1f85a73947869015378bb829f6b4ef1d8e537d0a43ad7d37e45a734b54bd180" exitCode=0 Dec 10 15:59:15 crc kubenswrapper[4727]: I1210 15:59:15.408674 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbnrg" event={"ID":"50df53b2-e308-4f73-808b-fed950712814","Type":"ContainerDied","Data":"e1f85a73947869015378bb829f6b4ef1d8e537d0a43ad7d37e45a734b54bd180"} Dec 10 15:59:15 crc kubenswrapper[4727]: E1210 15:59:15.565319 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:59:17 crc kubenswrapper[4727]: I1210 15:59:17.469972 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbnrg" 
event={"ID":"50df53b2-e308-4f73-808b-fed950712814","Type":"ContainerStarted","Data":"7f943ac2e2ca223955fbed50a9982bba31ec65f79a6972c41d571f139547d996"} Dec 10 15:59:17 crc kubenswrapper[4727]: I1210 15:59:17.502759 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dbnrg" podStartSLOduration=3.710071605 podStartE2EDuration="7.502739834s" podCreationTimestamp="2025-12-10 15:59:10 +0000 UTC" firstStartedPulling="2025-12-10 15:59:12.377521561 +0000 UTC m=+5256.572296103" lastFinishedPulling="2025-12-10 15:59:16.17018979 +0000 UTC m=+5260.364964332" observedRunningTime="2025-12-10 15:59:17.500013494 +0000 UTC m=+5261.694788036" watchObservedRunningTime="2025-12-10 15:59:17.502739834 +0000 UTC m=+5261.697514376" Dec 10 15:59:20 crc kubenswrapper[4727]: I1210 15:59:20.744090 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:20 crc kubenswrapper[4727]: I1210 15:59:20.744693 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:20 crc kubenswrapper[4727]: I1210 15:59:20.797530 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:21 crc kubenswrapper[4727]: I1210 15:59:21.567493 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:21 crc kubenswrapper[4727]: I1210 15:59:21.619762 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dbnrg"] Dec 10 15:59:23 crc kubenswrapper[4727]: I1210 15:59:23.530813 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dbnrg" podUID="50df53b2-e308-4f73-808b-fed950712814" containerName="registry-server" containerID="cri-o://7f943ac2e2ca223955fbed50a9982bba31ec65f79a6972c41d571f139547d996" gracePeriod=2 Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.050032 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.133636 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50df53b2-e308-4f73-808b-fed950712814-catalog-content\") pod \"50df53b2-e308-4f73-808b-fed950712814\" (UID: \"50df53b2-e308-4f73-808b-fed950712814\") " Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.133747 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50df53b2-e308-4f73-808b-fed950712814-utilities\") pod \"50df53b2-e308-4f73-808b-fed950712814\" (UID: \"50df53b2-e308-4f73-808b-fed950712814\") " Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.134025 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vghpl\" (UniqueName: \"kubernetes.io/projected/50df53b2-e308-4f73-808b-fed950712814-kube-api-access-vghpl\") pod \"50df53b2-e308-4f73-808b-fed950712814\" (UID: \"50df53b2-e308-4f73-808b-fed950712814\") " Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.135326 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50df53b2-e308-4f73-808b-fed950712814-utilities" (OuterVolumeSpecName: "utilities") pod "50df53b2-e308-4f73-808b-fed950712814" (UID: "50df53b2-e308-4f73-808b-fed950712814"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.143518 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50df53b2-e308-4f73-808b-fed950712814-kube-api-access-vghpl" (OuterVolumeSpecName: "kube-api-access-vghpl") pod "50df53b2-e308-4f73-808b-fed950712814" (UID: "50df53b2-e308-4f73-808b-fed950712814"). InnerVolumeSpecName "kube-api-access-vghpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.184783 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50df53b2-e308-4f73-808b-fed950712814-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50df53b2-e308-4f73-808b-fed950712814" (UID: "50df53b2-e308-4f73-808b-fed950712814"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.237038 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vghpl\" (UniqueName: \"kubernetes.io/projected/50df53b2-e308-4f73-808b-fed950712814-kube-api-access-vghpl\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.237089 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50df53b2-e308-4f73-808b-fed950712814-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.237103 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50df53b2-e308-4f73-808b-fed950712814-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.543729 4727 generic.go:334] "Generic (PLEG): container finished" podID="50df53b2-e308-4f73-808b-fed950712814" containerID="7f943ac2e2ca223955fbed50a9982bba31ec65f79a6972c41d571f139547d996" exitCode=0 Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.543804 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dbnrg" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.543843 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbnrg" event={"ID":"50df53b2-e308-4f73-808b-fed950712814","Type":"ContainerDied","Data":"7f943ac2e2ca223955fbed50a9982bba31ec65f79a6972c41d571f139547d996"} Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.544930 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbnrg" event={"ID":"50df53b2-e308-4f73-808b-fed950712814","Type":"ContainerDied","Data":"6a335ee7d4f8c7f75e07203113cd8a4ae14128faf94e0e567651becd6be9cdfc"} Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.544960 4727 scope.go:117] "RemoveContainer" containerID="7f943ac2e2ca223955fbed50a9982bba31ec65f79a6972c41d571f139547d996" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.572558 4727 scope.go:117] "RemoveContainer" containerID="e1f85a73947869015378bb829f6b4ef1d8e537d0a43ad7d37e45a734b54bd180" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.592470 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dbnrg"] Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.605028 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dbnrg"] Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.625291 4727 scope.go:117] "RemoveContainer" containerID="0970bcdf5dbc6454085e145793df4b3a98ddfac1a45595902f3bfcc76d06b2aa" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.686177 4727 scope.go:117] "RemoveContainer" containerID="7f943ac2e2ca223955fbed50a9982bba31ec65f79a6972c41d571f139547d996" Dec 10 15:59:24 crc kubenswrapper[4727]: E1210 15:59:24.686651 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f943ac2e2ca223955fbed50a9982bba31ec65f79a6972c41d571f139547d996\": container with ID starting with 7f943ac2e2ca223955fbed50a9982bba31ec65f79a6972c41d571f139547d996 not found: ID does not exist" containerID="7f943ac2e2ca223955fbed50a9982bba31ec65f79a6972c41d571f139547d996" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.686705 
4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f943ac2e2ca223955fbed50a9982bba31ec65f79a6972c41d571f139547d996"} err="failed to get container status \"7f943ac2e2ca223955fbed50a9982bba31ec65f79a6972c41d571f139547d996\": rpc error: code = NotFound desc = could not find container \"7f943ac2e2ca223955fbed50a9982bba31ec65f79a6972c41d571f139547d996\": container with ID starting with 7f943ac2e2ca223955fbed50a9982bba31ec65f79a6972c41d571f139547d996 not found: ID does not exist" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.686736 4727 scope.go:117] "RemoveContainer" containerID="e1f85a73947869015378bb829f6b4ef1d8e537d0a43ad7d37e45a734b54bd180" Dec 10 15:59:24 crc kubenswrapper[4727]: E1210 15:59:24.687232 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f85a73947869015378bb829f6b4ef1d8e537d0a43ad7d37e45a734b54bd180\": container with ID starting with e1f85a73947869015378bb829f6b4ef1d8e537d0a43ad7d37e45a734b54bd180 not found: ID does not exist" containerID="e1f85a73947869015378bb829f6b4ef1d8e537d0a43ad7d37e45a734b54bd180" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.687278 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f85a73947869015378bb829f6b4ef1d8e537d0a43ad7d37e45a734b54bd180"} err="failed to get container status \"e1f85a73947869015378bb829f6b4ef1d8e537d0a43ad7d37e45a734b54bd180\": rpc error: code = NotFound desc = could not find container \"e1f85a73947869015378bb829f6b4ef1d8e537d0a43ad7d37e45a734b54bd180\": container with ID starting with e1f85a73947869015378bb829f6b4ef1d8e537d0a43ad7d37e45a734b54bd180 not found: ID does not exist" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.687307 4727 scope.go:117] "RemoveContainer" containerID="0970bcdf5dbc6454085e145793df4b3a98ddfac1a45595902f3bfcc76d06b2aa" Dec 10 15:59:24 crc kubenswrapper[4727]: E1210 15:59:24.687656 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0970bcdf5dbc6454085e145793df4b3a98ddfac1a45595902f3bfcc76d06b2aa\": container with ID starting with 0970bcdf5dbc6454085e145793df4b3a98ddfac1a45595902f3bfcc76d06b2aa not found: ID does not exist" containerID="0970bcdf5dbc6454085e145793df4b3a98ddfac1a45595902f3bfcc76d06b2aa" Dec 10 15:59:24 crc kubenswrapper[4727]: I1210 15:59:24.687709 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0970bcdf5dbc6454085e145793df4b3a98ddfac1a45595902f3bfcc76d06b2aa"} err="failed to get container status \"0970bcdf5dbc6454085e145793df4b3a98ddfac1a45595902f3bfcc76d06b2aa\": rpc error: code = NotFound desc = could not find container \"0970bcdf5dbc6454085e145793df4b3a98ddfac1a45595902f3bfcc76d06b2aa\": container with ID starting with 0970bcdf5dbc6454085e145793df4b3a98ddfac1a45595902f3bfcc76d06b2aa not found: ID does not exist" Dec 10 15:59:24 crc kubenswrapper[4727]: E1210 15:59:24.791227 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50df53b2_e308_4f73_808b_fed950712814.slice/crio-6a335ee7d4f8c7f75e07203113cd8a4ae14128faf94e0e567651becd6be9cdfc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50df53b2_e308_4f73_808b_fed950712814.slice\": RecentStats: unable to find 
data in memory cache]" Dec 10 15:59:26 crc kubenswrapper[4727]: E1210 15:59:26.578048 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:59:26 crc kubenswrapper[4727]: I1210 15:59:26.581219 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50df53b2-e308-4f73-808b-fed950712814" path="/var/lib/kubelet/pods/50df53b2-e308-4f73-808b-fed950712814/volumes" Dec 10 15:59:28 crc kubenswrapper[4727]: E1210 15:59:28.690937 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:59:28 crc kubenswrapper[4727]: E1210 15:59:28.691458 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:59:28 crc kubenswrapper[4727]: E1210 15:59:28.691622 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:59:28 crc kubenswrapper[4727]: E1210 15:59:28.692823 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:59:37 crc kubenswrapper[4727]: I1210 15:59:37.724162 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:59:37 crc kubenswrapper[4727]: I1210 15:59:37.725482 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:59:38 crc kubenswrapper[4727]: E1210 15:59:38.565106 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:59:43 crc kubenswrapper[4727]: E1210 15:59:43.566869 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 15:59:52 crc kubenswrapper[4727]: E1210 15:59:52.565372 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 15:59:56 crc kubenswrapper[4727]: E1210 15:59:56.574340 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.156478 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx"] Dec 10 16:00:00 crc kubenswrapper[4727]: E1210 16:00:00.157667 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50df53b2-e308-4f73-808b-fed950712814" containerName="extract-utilities" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.157690 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="50df53b2-e308-4f73-808b-fed950712814" containerName="extract-utilities" Dec 10 16:00:00 crc kubenswrapper[4727]: E1210 16:00:00.157761 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50df53b2-e308-4f73-808b-fed950712814" containerName="registry-server" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.157770 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="50df53b2-e308-4f73-808b-fed950712814" containerName="registry-server" Dec 10 16:00:00 crc kubenswrapper[4727]: E1210 16:00:00.157802 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50df53b2-e308-4f73-808b-fed950712814" containerName="extract-content" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.157810 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="50df53b2-e308-4f73-808b-fed950712814" containerName="extract-content" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.158106 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="50df53b2-e308-4f73-808b-fed950712814" containerName="registry-server" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.159308 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.162815 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.165341 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.174311 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx"] Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.237877 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99c5x\" (UniqueName: \"kubernetes.io/projected/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-kube-api-access-99c5x\") pod \"collect-profiles-29423040-trgrx\" (UID: \"3beb04c8-689b-4ec5-88de-b965d0cdf4ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.238160 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-config-volume\") pod \"collect-profiles-29423040-trgrx\" (UID: \"3beb04c8-689b-4ec5-88de-b965d0cdf4ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.238230 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-secret-volume\") pod \"collect-profiles-29423040-trgrx\" (UID: \"3beb04c8-689b-4ec5-88de-b965d0cdf4ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.341216 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-config-volume\") pod \"collect-profiles-29423040-trgrx\" (UID: \"3beb04c8-689b-4ec5-88de-b965d0cdf4ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.341367 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-secret-volume\") pod \"collect-profiles-29423040-trgrx\" (UID: \"3beb04c8-689b-4ec5-88de-b965d0cdf4ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.341479 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99c5x\" (UniqueName: \"kubernetes.io/projected/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-kube-api-access-99c5x\") pod \"collect-profiles-29423040-trgrx\" (UID: \"3beb04c8-689b-4ec5-88de-b965d0cdf4ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.342265 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-config-volume\") pod 
\"collect-profiles-29423040-trgrx\" (UID: \"3beb04c8-689b-4ec5-88de-b965d0cdf4ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.627798 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-secret-volume\") pod \"collect-profiles-29423040-trgrx\" (UID: \"3beb04c8-689b-4ec5-88de-b965d0cdf4ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.627980 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99c5x\" (UniqueName: \"kubernetes.io/projected/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-kube-api-access-99c5x\") pod \"collect-profiles-29423040-trgrx\" (UID: \"3beb04c8-689b-4ec5-88de-b965d0cdf4ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" Dec 10 16:00:00 crc kubenswrapper[4727]: I1210 16:00:00.793184 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" Dec 10 16:00:01 crc kubenswrapper[4727]: I1210 16:00:01.273573 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx"] Dec 10 16:00:01 crc kubenswrapper[4727]: W1210 16:00:01.276045 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3beb04c8_689b_4ec5_88de_b965d0cdf4ac.slice/crio-c8a07697b346fef84bf5fca5df537a031bc037774093cdee2d8e10d663ea99eb WatchSource:0}: Error finding container c8a07697b346fef84bf5fca5df537a031bc037774093cdee2d8e10d663ea99eb: Status 404 returned error can't find the container with id c8a07697b346fef84bf5fca5df537a031bc037774093cdee2d8e10d663ea99eb Dec 10 16:00:01 crc kubenswrapper[4727]: I1210 16:00:01.961236 4727 generic.go:334] "Generic (PLEG): container finished" podID="3beb04c8-689b-4ec5-88de-b965d0cdf4ac" containerID="78d7ea399007142a4c68d0347375dd605e57d48d46cc58bcffdd9323580351d7" exitCode=0 Dec 10 16:00:01 crc kubenswrapper[4727]: I1210 16:00:01.961344 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" event={"ID":"3beb04c8-689b-4ec5-88de-b965d0cdf4ac","Type":"ContainerDied","Data":"78d7ea399007142a4c68d0347375dd605e57d48d46cc58bcffdd9323580351d7"} Dec 10 16:00:01 crc kubenswrapper[4727]: I1210 16:00:01.961520 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" event={"ID":"3beb04c8-689b-4ec5-88de-b965d0cdf4ac","Type":"ContainerStarted","Data":"c8a07697b346fef84bf5fca5df537a031bc037774093cdee2d8e10d663ea99eb"} Dec 10 16:00:03 crc kubenswrapper[4727]: I1210 16:00:03.445484 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" Dec 10 16:00:03 crc kubenswrapper[4727]: E1210 16:00:03.565142 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:00:03 crc kubenswrapper[4727]: I1210 16:00:03.591331 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-config-volume\") pod \"3beb04c8-689b-4ec5-88de-b965d0cdf4ac\" (UID: \"3beb04c8-689b-4ec5-88de-b965d0cdf4ac\") " Dec 10 16:00:03 crc kubenswrapper[4727]: I1210 16:00:03.591588 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99c5x\" (UniqueName: \"kubernetes.io/projected/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-kube-api-access-99c5x\") pod \"3beb04c8-689b-4ec5-88de-b965d0cdf4ac\" (UID: \"3beb04c8-689b-4ec5-88de-b965d0cdf4ac\") " Dec 10 16:00:03 crc kubenswrapper[4727]: I1210 16:00:03.591728 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-secret-volume\") pod \"3beb04c8-689b-4ec5-88de-b965d0cdf4ac\" (UID: \"3beb04c8-689b-4ec5-88de-b965d0cdf4ac\") " Dec 10 16:00:03 crc kubenswrapper[4727]: I1210 16:00:03.592218 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-config-volume" (OuterVolumeSpecName: "config-volume") pod "3beb04c8-689b-4ec5-88de-b965d0cdf4ac" (UID: "3beb04c8-689b-4ec5-88de-b965d0cdf4ac"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 16:00:03 crc kubenswrapper[4727]: I1210 16:00:03.600155 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3beb04c8-689b-4ec5-88de-b965d0cdf4ac" (UID: "3beb04c8-689b-4ec5-88de-b965d0cdf4ac"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:00:03 crc kubenswrapper[4727]: I1210 16:00:03.600474 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-kube-api-access-99c5x" (OuterVolumeSpecName: "kube-api-access-99c5x") pod "3beb04c8-689b-4ec5-88de-b965d0cdf4ac" (UID: "3beb04c8-689b-4ec5-88de-b965d0cdf4ac"). InnerVolumeSpecName "kube-api-access-99c5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:00:03 crc kubenswrapper[4727]: I1210 16:00:03.695178 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:03 crc kubenswrapper[4727]: I1210 16:00:03.695221 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99c5x\" (UniqueName: \"kubernetes.io/projected/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-kube-api-access-99c5x\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:03 crc kubenswrapper[4727]: I1210 16:00:03.695233 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3beb04c8-689b-4ec5-88de-b965d0cdf4ac-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:04 crc kubenswrapper[4727]: I1210 16:00:04.015406 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" event={"ID":"3beb04c8-689b-4ec5-88de-b965d0cdf4ac","Type":"ContainerDied","Data":"c8a07697b346fef84bf5fca5df537a031bc037774093cdee2d8e10d663ea99eb"} Dec 10 16:00:04 crc kubenswrapper[4727]: I1210 16:00:04.015727 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8a07697b346fef84bf5fca5df537a031bc037774093cdee2d8e10d663ea99eb" Dec 10 16:00:04 crc kubenswrapper[4727]: I1210 16:00:04.015514 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-trgrx" Dec 10 16:00:04 crc kubenswrapper[4727]: I1210 16:00:04.528284 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4"] Dec 10 16:00:04 crc kubenswrapper[4727]: I1210 16:00:04.538893 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422995-gtzc4"] Dec 10 16:00:04 crc kubenswrapper[4727]: I1210 16:00:04.576174 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5d51a83-b933-4957-8890-e7141b841a89" path="/var/lib/kubelet/pods/d5d51a83-b933-4957-8890-e7141b841a89/volumes" Dec 10 16:00:07 crc kubenswrapper[4727]: I1210 16:00:07.723849 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:00:07 crc kubenswrapper[4727]: I1210 16:00:07.724545 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:00:10 crc kubenswrapper[4727]: E1210 16:00:10.565538 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:00:17 crc kubenswrapper[4727]: E1210 16:00:17.566051 4727 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:00:23 crc kubenswrapper[4727]: E1210 16:00:23.567172 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:00:28 crc kubenswrapper[4727]: I1210 16:00:28.018097 4727 scope.go:117] "RemoveContainer" containerID="104f5c00a6433b70ee0fbdb99cbd1a921826a10a6cf0ddf89482527bad47e496" Dec 10 16:00:30 crc kubenswrapper[4727]: E1210 16:00:30.565440 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:00:37 crc kubenswrapper[4727]: I1210 16:00:37.723773 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:00:37 crc kubenswrapper[4727]: I1210 16:00:37.724390 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:00:37 crc kubenswrapper[4727]: I1210 16:00:37.724456 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 16:00:37 crc kubenswrapper[4727]: I1210 16:00:37.725598 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:00:37 crc kubenswrapper[4727]: I1210 16:00:37.725673 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" gracePeriod=600 Dec 10 16:00:37 crc kubenswrapper[4727]: E1210 16:00:37.853074 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" 
podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:00:38 crc kubenswrapper[4727]: I1210 16:00:38.539001 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" exitCode=0 Dec 10 16:00:38 crc kubenswrapper[4727]: I1210 16:00:38.539072 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363"} Dec 10 16:00:38 crc kubenswrapper[4727]: I1210 16:00:38.539347 4727 scope.go:117] "RemoveContainer" containerID="d52d7495ee017e27684a9a64e5411158c01fd0cd114a36152e7e2694730b7222" Dec 10 16:00:38 crc kubenswrapper[4727]: I1210 16:00:38.540218 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:00:38 crc kubenswrapper[4727]: E1210 16:00:38.540591 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:00:38 crc kubenswrapper[4727]: E1210 16:00:38.611718 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:00:41 crc kubenswrapper[4727]: E1210 16:00:41.565694 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:00:49 crc kubenswrapper[4727]: I1210 16:00:49.564267 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:00:49 crc kubenswrapper[4727]: E1210 16:00:49.565214 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:00:50 crc kubenswrapper[4727]: E1210 16:00:50.565762 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:00:52 crc kubenswrapper[4727]: E1210 16:00:52.566055 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.185278 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29423041-9xj8q"] Dec 10 16:01:00 crc kubenswrapper[4727]: E1210 16:01:00.186361 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3beb04c8-689b-4ec5-88de-b965d0cdf4ac" containerName="collect-profiles" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.186378 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3beb04c8-689b-4ec5-88de-b965d0cdf4ac" containerName="collect-profiles" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.186648 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3beb04c8-689b-4ec5-88de-b965d0cdf4ac" containerName="collect-profiles" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.187647 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.199503 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-fernet-keys\") pod \"keystone-cron-29423041-9xj8q\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.199600 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-combined-ca-bundle\") pod \"keystone-cron-29423041-9xj8q\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.199658 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-config-data\") pod \"keystone-cron-29423041-9xj8q\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.199792 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtmn9\" (UniqueName: \"kubernetes.io/projected/a614b5ac-cc30-4106-882b-7089e9fba8b7-kube-api-access-xtmn9\") pod \"keystone-cron-29423041-9xj8q\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.218935 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29423041-9xj8q"] Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.304251 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-config-data\") pod \"keystone-cron-29423041-9xj8q\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.304642 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtmn9\" 
(UniqueName: \"kubernetes.io/projected/a614b5ac-cc30-4106-882b-7089e9fba8b7-kube-api-access-xtmn9\") pod \"keystone-cron-29423041-9xj8q\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.304683 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-fernet-keys\") pod \"keystone-cron-29423041-9xj8q\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.304736 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-combined-ca-bundle\") pod \"keystone-cron-29423041-9xj8q\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.314704 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-fernet-keys\") pod \"keystone-cron-29423041-9xj8q\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.320155 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-combined-ca-bundle\") pod \"keystone-cron-29423041-9xj8q\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.331815 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-config-data\") pod \"keystone-cron-29423041-9xj8q\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.359710 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtmn9\" (UniqueName: \"kubernetes.io/projected/a614b5ac-cc30-4106-882b-7089e9fba8b7-kube-api-access-xtmn9\") pod \"keystone-cron-29423041-9xj8q\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:00 crc kubenswrapper[4727]: I1210 16:01:00.518102 4727 util.go:30] "No sandbox for pod can be found. 
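
A side note on the job name: keystone-cron-29423041 is not a random suffix. The CronJob controller names each Job after its scheduled time expressed in minutes since the Unix epoch, so the suffix decodes to the 16:01 schedule slot (and collect-profiles-29423040 earlier in the log, one minute before, to 16:00):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // CronJob-created Jobs carry the scheduled time, in minutes since
        // the Unix epoch, as their name suffix.
        for _, suffix := range []int64{29423040, 29423041} {
            fmt.Println(suffix, "->", time.Unix(suffix*60, 0).UTC())
        }
        // 29423040 -> 2025-12-10 16:00:00 +0000 UTC
        // 29423041 -> 2025-12-10 16:01:00 +0000 UTC
    }
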
Need to start a new one" pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:01 crc kubenswrapper[4727]: I1210 16:01:01.007036 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29423041-9xj8q"] Dec 10 16:01:01 crc kubenswrapper[4727]: I1210 16:01:01.783886 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29423041-9xj8q" event={"ID":"a614b5ac-cc30-4106-882b-7089e9fba8b7","Type":"ContainerStarted","Data":"518d31e405e965059f7e3a2ba202e3193bab87a0e13014db1086118dd33384f4"} Dec 10 16:01:01 crc kubenswrapper[4727]: I1210 16:01:01.783958 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29423041-9xj8q" event={"ID":"a614b5ac-cc30-4106-882b-7089e9fba8b7","Type":"ContainerStarted","Data":"20dd7a4fcfc1e7b28685cf5af078c7bebc27b9df35435a6d38e18a0630acc035"} Dec 10 16:01:01 crc kubenswrapper[4727]: I1210 16:01:01.809109 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29423041-9xj8q" podStartSLOduration=1.809067707 podStartE2EDuration="1.809067707s" podCreationTimestamp="2025-12-10 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 16:01:01.804232498 +0000 UTC m=+5365.999007060" watchObservedRunningTime="2025-12-10 16:01:01.809067707 +0000 UTC m=+5366.003842249" Dec 10 16:01:02 crc kubenswrapper[4727]: I1210 16:01:02.564322 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:01:02 crc kubenswrapper[4727]: E1210 16:01:02.564896 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:01:02 crc kubenswrapper[4727]: E1210 16:01:02.565798 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:01:04 crc kubenswrapper[4727]: I1210 16:01:04.816413 4727 generic.go:334] "Generic (PLEG): container finished" podID="a614b5ac-cc30-4106-882b-7089e9fba8b7" containerID="518d31e405e965059f7e3a2ba202e3193bab87a0e13014db1086118dd33384f4" exitCode=0 Dec 10 16:01:04 crc kubenswrapper[4727]: I1210 16:01:04.816501 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29423041-9xj8q" event={"ID":"a614b5ac-cc30-4106-882b-7089e9fba8b7","Type":"ContainerDied","Data":"518d31e405e965059f7e3a2ba202e3193bab87a0e13014db1086118dd33384f4"} Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.228573 4727 util.go:48] "No ready sandbox for pod can be found. 
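
In the startup-latency line above, firstStartedPulling and lastFinishedPulling are 0001-01-01 00:00:00 +0000 UTC, which is Go's zero time.Time: no image pull happened because the keystone image was already on the node, so podStartSLOduration and podStartE2EDuration are identical at 1.809s (pod created 16:01:00, observed running 16:01:01.809). A small illustration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var firstStartedPulling time.Time // zero value, prints as 0001-01-01 00:00:00 +0000 UTC
        fmt.Println(firstStartedPulling, firstStartedPulling.IsZero())

        created := time.Date(2025, 12, 10, 16, 1, 0, 0, time.UTC)
        running := created.Add(1809067707 * time.Nanosecond)
        // With no pull window to subtract, SLO duration == end-to-end duration.
        fmt.Println(running.Sub(created)) // 1.809067707s
    }
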
Need to start a new one" pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.240204 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-fernet-keys\") pod \"a614b5ac-cc30-4106-882b-7089e9fba8b7\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.240353 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-combined-ca-bundle\") pod \"a614b5ac-cc30-4106-882b-7089e9fba8b7\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.240458 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-config-data\") pod \"a614b5ac-cc30-4106-882b-7089e9fba8b7\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.240551 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtmn9\" (UniqueName: \"kubernetes.io/projected/a614b5ac-cc30-4106-882b-7089e9fba8b7-kube-api-access-xtmn9\") pod \"a614b5ac-cc30-4106-882b-7089e9fba8b7\" (UID: \"a614b5ac-cc30-4106-882b-7089e9fba8b7\") " Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.252656 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a614b5ac-cc30-4106-882b-7089e9fba8b7-kube-api-access-xtmn9" (OuterVolumeSpecName: "kube-api-access-xtmn9") pod "a614b5ac-cc30-4106-882b-7089e9fba8b7" (UID: "a614b5ac-cc30-4106-882b-7089e9fba8b7"). InnerVolumeSpecName "kube-api-access-xtmn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.257048 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a614b5ac-cc30-4106-882b-7089e9fba8b7" (UID: "a614b5ac-cc30-4106-882b-7089e9fba8b7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.275617 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a614b5ac-cc30-4106-882b-7089e9fba8b7" (UID: "a614b5ac-cc30-4106-882b-7089e9fba8b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.311389 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-config-data" (OuterVolumeSpecName: "config-data") pod "a614b5ac-cc30-4106-882b-7089e9fba8b7" (UID: "a614b5ac-cc30-4106-882b-7089e9fba8b7"). InnerVolumeSpecName "config-data". 
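
The kube-api-access-xtmn9 volume being unmounted above is the projected service-account token volume that gets added to essentially every pod; the five-character suffix is drawn from Kubernetes' name-generation alphabet, which, as far as I recall, omits vowels (so generated names cannot spell words) and look-alike characters like 0 and 1. A sketch under that assumption:

    package main

    import (
        "fmt"
        "math/rand"
    )

    // Assumed suffix alphabet: no vowels, no look-alike characters. Every
    // generated suffix in this log fits it (xtmn9, 99c5x, hjqcq, pssg5, svjls).
    const alphanums = "bcdfghjklmnpqrstvwxz2456789"

    func suffix(n int) string {
        b := make([]byte, n)
        for i := range b {
            b[i] = alphanums[rand.Intn(len(alphanums))]
        }
        return string(b)
    }

    func main() {
        fmt.Println("kube-api-access-" + suffix(5))
    }
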
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.343037 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.343072 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtmn9\" (UniqueName: \"kubernetes.io/projected/a614b5ac-cc30-4106-882b-7089e9fba8b7-kube-api-access-xtmn9\") on node \"crc\" DevicePath \"\"" Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.343101 4727 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.343110 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a614b5ac-cc30-4106-882b-7089e9fba8b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 16:01:06 crc kubenswrapper[4727]: E1210 16:01:06.573044 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.837068 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29423041-9xj8q" event={"ID":"a614b5ac-cc30-4106-882b-7089e9fba8b7","Type":"ContainerDied","Data":"20dd7a4fcfc1e7b28685cf5af078c7bebc27b9df35435a6d38e18a0630acc035"} Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.837107 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29423041-9xj8q" Dec 10 16:01:06 crc kubenswrapper[4727]: I1210 16:01:06.837124 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20dd7a4fcfc1e7b28685cf5af078c7bebc27b9df35435a6d38e18a0630acc035" Dec 10 16:01:13 crc kubenswrapper[4727]: E1210 16:01:13.565880 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:01:15 crc kubenswrapper[4727]: I1210 16:01:15.562868 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:01:15 crc kubenswrapper[4727]: E1210 16:01:15.563518 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:01:21 crc kubenswrapper[4727]: E1210 16:01:21.565181 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:01:27 crc kubenswrapper[4727]: E1210 16:01:27.566125 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:01:28 crc kubenswrapper[4727]: I1210 16:01:28.102007 4727 scope.go:117] "RemoveContainer" containerID="11d93c1c39199c22f5cf7b7302e0a5789e9d60af966f7603943d0a0db0b3bd65" Dec 10 16:01:28 crc kubenswrapper[4727]: I1210 16:01:28.131555 4727 scope.go:117] "RemoveContainer" containerID="a7af12cd4ba42311485fb0e82260610538b5c4322eb4bad486ac92152e6e7c56" Dec 10 16:01:28 crc kubenswrapper[4727]: I1210 16:01:28.218460 4727 scope.go:117] "RemoveContainer" containerID="39f232f540ae2ade45f1b2ca7231d7970c98ee10626086b872a5cae9331e0613" Dec 10 16:01:28 crc kubenswrapper[4727]: I1210 16:01:28.563301 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:01:28 crc kubenswrapper[4727]: E1210 16:01:28.563597 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:01:36 crc kubenswrapper[4727]: E1210 16:01:36.573180 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:01:41 crc kubenswrapper[4727]: I1210 16:01:41.564103 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:01:41 crc kubenswrapper[4727]: E1210 16:01:41.565991 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:01:42 crc kubenswrapper[4727]: E1210 16:01:42.566217 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:01:50 crc kubenswrapper[4727]: E1210 16:01:50.565637 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:01:55 crc kubenswrapper[4727]: I1210 16:01:55.563541 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:01:55 crc kubenswrapper[4727]: E1210 16:01:55.564425 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:01:57 crc kubenswrapper[4727]: E1210 16:01:57.566497 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:02:01 crc kubenswrapper[4727]: E1210 16:02:01.565714 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:02:06 crc kubenswrapper[4727]: I1210 16:02:06.570386 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:02:06 crc kubenswrapper[4727]: E1210 16:02:06.571255 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:02:11 crc kubenswrapper[4727]: E1210 16:02:11.565653 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:02:13 crc kubenswrapper[4727]: E1210 16:02:13.564872 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:02:21 crc kubenswrapper[4727]: I1210 16:02:21.563304 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:02:21 crc kubenswrapper[4727]: E1210 16:02:21.564201 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:02:25 crc kubenswrapper[4727]: E1210 16:02:25.566008 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:02:25 crc kubenswrapper[4727]: E1210 16:02:25.566046 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:02:34 crc kubenswrapper[4727]: I1210 16:02:34.564176 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:02:34 crc kubenswrapper[4727]: E1210 16:02:34.566158 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:02:37 crc kubenswrapper[4727]: E1210 16:02:37.564893 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:02:38 crc kubenswrapper[4727]: E1210 16:02:38.565520 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:02:43 crc kubenswrapper[4727]: I1210 16:02:43.309035 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tlvqk/must-gather-kvlgm"] Dec 10 16:02:43 crc kubenswrapper[4727]: E1210 16:02:43.310220 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a614b5ac-cc30-4106-882b-7089e9fba8b7" containerName="keystone-cron" Dec 10 16:02:43 crc kubenswrapper[4727]: I1210 16:02:43.310239 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a614b5ac-cc30-4106-882b-7089e9fba8b7" containerName="keystone-cron" Dec 10 16:02:43 crc kubenswrapper[4727]: I1210 16:02:43.310588 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a614b5ac-cc30-4106-882b-7089e9fba8b7" containerName="keystone-cron" Dec 10 16:02:43 crc kubenswrapper[4727]: I1210 16:02:43.312356 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tlvqk/must-gather-kvlgm" Dec 10 16:02:43 crc kubenswrapper[4727]: I1210 16:02:43.319655 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tlvqk"/"openshift-service-ca.crt" Dec 10 16:02:43 crc kubenswrapper[4727]: I1210 16:02:43.320022 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tlvqk"/"kube-root-ca.crt" Dec 10 16:02:43 crc kubenswrapper[4727]: I1210 16:02:43.331170 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tlvqk/must-gather-kvlgm"] Dec 10 16:02:43 crc kubenswrapper[4727]: I1210 16:02:43.405834 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjqcq\" (UniqueName: \"kubernetes.io/projected/56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5-kube-api-access-hjqcq\") pod \"must-gather-kvlgm\" (UID: \"56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5\") " pod="openshift-must-gather-tlvqk/must-gather-kvlgm" Dec 10 16:02:43 crc kubenswrapper[4727]: I1210 16:02:43.406348 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5-must-gather-output\") pod \"must-gather-kvlgm\" (UID: \"56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5\") " pod="openshift-must-gather-tlvqk/must-gather-kvlgm" Dec 10 16:02:43 crc kubenswrapper[4727]: I1210 16:02:43.508271 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5-must-gather-output\") pod \"must-gather-kvlgm\" (UID: \"56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5\") " pod="openshift-must-gather-tlvqk/must-gather-kvlgm" Dec 10 16:02:43 crc kubenswrapper[4727]: I1210 16:02:43.508375 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjqcq\" (UniqueName: 
\"kubernetes.io/projected/56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5-kube-api-access-hjqcq\") pod \"must-gather-kvlgm\" (UID: \"56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5\") " pod="openshift-must-gather-tlvqk/must-gather-kvlgm" Dec 10 16:02:43 crc kubenswrapper[4727]: I1210 16:02:43.508767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5-must-gather-output\") pod \"must-gather-kvlgm\" (UID: \"56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5\") " pod="openshift-must-gather-tlvqk/must-gather-kvlgm" Dec 10 16:02:43 crc kubenswrapper[4727]: I1210 16:02:43.533755 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjqcq\" (UniqueName: \"kubernetes.io/projected/56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5-kube-api-access-hjqcq\") pod \"must-gather-kvlgm\" (UID: \"56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5\") " pod="openshift-must-gather-tlvqk/must-gather-kvlgm" Dec 10 16:02:43 crc kubenswrapper[4727]: I1210 16:02:43.632636 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tlvqk/must-gather-kvlgm" Dec 10 16:02:44 crc kubenswrapper[4727]: I1210 16:02:44.164115 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tlvqk/must-gather-kvlgm"] Dec 10 16:02:44 crc kubenswrapper[4727]: I1210 16:02:44.333142 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tlvqk/must-gather-kvlgm" event={"ID":"56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5","Type":"ContainerStarted","Data":"ecff789b108d70a37ababfb16642ded26aea80a1a12d743cae03673bfd599fd9"} Dec 10 16:02:45 crc kubenswrapper[4727]: I1210 16:02:45.563239 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:02:45 crc kubenswrapper[4727]: E1210 16:02:45.563818 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:02:49 crc kubenswrapper[4727]: E1210 16:02:49.566669 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:02:53 crc kubenswrapper[4727]: E1210 16:02:53.577936 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:02:54 crc kubenswrapper[4727]: I1210 16:02:54.435088 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tlvqk/must-gather-kvlgm" event={"ID":"56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5","Type":"ContainerStarted","Data":"8c5d0612e187c1b76427881c9cd78320cc848b8929bcfb6728b671f76e705b12"} Dec 10 16:02:54 crc kubenswrapper[4727]: I1210 16:02:54.435344 
4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tlvqk/must-gather-kvlgm" event={"ID":"56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5","Type":"ContainerStarted","Data":"4e6e93b2720c741dcabfd8a3fa859435523e7290cbf9a4fb17afc9ca823bec3f"} Dec 10 16:02:54 crc kubenswrapper[4727]: I1210 16:02:54.456626 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tlvqk/must-gather-kvlgm" podStartSLOduration=2.081446663 podStartE2EDuration="11.45659785s" podCreationTimestamp="2025-12-10 16:02:43 +0000 UTC" firstStartedPulling="2025-12-10 16:02:44.180520348 +0000 UTC m=+5468.375294890" lastFinishedPulling="2025-12-10 16:02:53.555671535 +0000 UTC m=+5477.750446077" observedRunningTime="2025-12-10 16:02:54.451949323 +0000 UTC m=+5478.646723875" watchObservedRunningTime="2025-12-10 16:02:54.45659785 +0000 UTC m=+5478.651372392" Dec 10 16:02:59 crc kubenswrapper[4727]: I1210 16:02:59.564193 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:02:59 crc kubenswrapper[4727]: E1210 16:02:59.564963 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:02:59 crc kubenswrapper[4727]: I1210 16:02:59.771569 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tlvqk/crc-debug-rblnq"] Dec 10 16:02:59 crc kubenswrapper[4727]: I1210 16:02:59.773166 4727 util.go:30] "No sandbox for pod can be found. 
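
The three durations in the must-gather startup line above are mutually consistent: the image pull window is lastFinishedPulling minus firstStartedPulling, about 9.375s, and podStartSLOduration is the 11.457s end-to-end figure with that pull window excluded — the tracker discounts image pulling from the SLO number. Checking the arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        firstPull := parse("2025-12-10 16:02:44.180520348 +0000 UTC")
        lastPull := parse("2025-12-10 16:02:53.555671535 +0000 UTC")
        e2e := 11456597850 * time.Nanosecond // podStartE2EDuration=11.45659785s

        pullWindow := lastPull.Sub(firstPull)
        fmt.Println(pullWindow)       // 9.375151187s
        fmt.Println(e2e - pullWindow) // 2.081446663s == logged podStartSLOduration
    }
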
Need to start a new one" pod="openshift-must-gather-tlvqk/crc-debug-rblnq" Dec 10 16:02:59 crc kubenswrapper[4727]: I1210 16:02:59.776052 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tlvqk"/"default-dockercfg-s8xfr" Dec 10 16:02:59 crc kubenswrapper[4727]: I1210 16:02:59.801650 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bbbb3151-1d33-4f85-b9cf-6e30882bb60e-host\") pod \"crc-debug-rblnq\" (UID: \"bbbb3151-1d33-4f85-b9cf-6e30882bb60e\") " pod="openshift-must-gather-tlvqk/crc-debug-rblnq" Dec 10 16:02:59 crc kubenswrapper[4727]: I1210 16:02:59.801724 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pssg5\" (UniqueName: \"kubernetes.io/projected/bbbb3151-1d33-4f85-b9cf-6e30882bb60e-kube-api-access-pssg5\") pod \"crc-debug-rblnq\" (UID: \"bbbb3151-1d33-4f85-b9cf-6e30882bb60e\") " pod="openshift-must-gather-tlvqk/crc-debug-rblnq" Dec 10 16:02:59 crc kubenswrapper[4727]: I1210 16:02:59.904736 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bbbb3151-1d33-4f85-b9cf-6e30882bb60e-host\") pod \"crc-debug-rblnq\" (UID: \"bbbb3151-1d33-4f85-b9cf-6e30882bb60e\") " pod="openshift-must-gather-tlvqk/crc-debug-rblnq" Dec 10 16:02:59 crc kubenswrapper[4727]: I1210 16:02:59.904820 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pssg5\" (UniqueName: \"kubernetes.io/projected/bbbb3151-1d33-4f85-b9cf-6e30882bb60e-kube-api-access-pssg5\") pod \"crc-debug-rblnq\" (UID: \"bbbb3151-1d33-4f85-b9cf-6e30882bb60e\") " pod="openshift-must-gather-tlvqk/crc-debug-rblnq" Dec 10 16:02:59 crc kubenswrapper[4727]: I1210 16:02:59.904992 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bbbb3151-1d33-4f85-b9cf-6e30882bb60e-host\") pod \"crc-debug-rblnq\" (UID: \"bbbb3151-1d33-4f85-b9cf-6e30882bb60e\") " pod="openshift-must-gather-tlvqk/crc-debug-rblnq" Dec 10 16:02:59 crc kubenswrapper[4727]: I1210 16:02:59.944753 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pssg5\" (UniqueName: \"kubernetes.io/projected/bbbb3151-1d33-4f85-b9cf-6e30882bb60e-kube-api-access-pssg5\") pod \"crc-debug-rblnq\" (UID: \"bbbb3151-1d33-4f85-b9cf-6e30882bb60e\") " pod="openshift-must-gather-tlvqk/crc-debug-rblnq" Dec 10 16:03:00 crc kubenswrapper[4727]: I1210 16:03:00.094273 4727 util.go:30] "No sandbox for pod can be found. 
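
crc-debug-rblnq is the node-debug helper that must-gather spins up (the kind of pod "oc debug node/<name>" creates): a single container with the node's root filesystem exposed through the hostPath volume named "host", plus the usual projected token volume. A toy rendering of that shape — the structs and the "/" path are illustrative stand-ins, not the real PodSpec types:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Minimal stand-ins for the PodSpec fields visible in the log above.
    type Volume struct {
        Name     string `json:"name"`
        HostPath string `json:"hostPath,omitempty"`
    }

    type Pod struct {
        Name    string   `json:"name"`
        Volumes []Volume `json:"volumes"`
    }

    func main() {
        p := Pod{
            Name: "crc-debug-rblnq",
            Volumes: []Volume{
                {Name: "host", HostPath: "/"},   // node root fs, typically mounted at /host in the container
                {Name: "kube-api-access-pssg5"}, // projected service-account token
            },
        }
        b, _ := json.MarshalIndent(p, "", "  ")
        fmt.Println(string(b))
    }
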
Need to start a new one" pod="openshift-must-gather-tlvqk/crc-debug-rblnq" Dec 10 16:03:00 crc kubenswrapper[4727]: I1210 16:03:00.501007 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tlvqk/crc-debug-rblnq" event={"ID":"bbbb3151-1d33-4f85-b9cf-6e30882bb60e","Type":"ContainerStarted","Data":"4d3c41e9d4ad0cd2924f20bc64c2d0e9338ab29355e946f2655058d5039dcea9"} Dec 10 16:03:01 crc kubenswrapper[4727]: E1210 16:03:01.631970 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:03:03 crc kubenswrapper[4727]: I1210 16:03:03.534436 4727 generic.go:334] "Generic (PLEG): container finished" podID="a08d6fc9-6282-40ef-9c63-99655ea0444c" containerID="a502c2ce1192a654af86f298574d899549fcf545f47d0808cdbffec3be922069" exitCode=2 Dec 10 16:03:03 crc kubenswrapper[4727]: I1210 16:03:03.534491 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp" event={"ID":"a08d6fc9-6282-40ef-9c63-99655ea0444c","Type":"ContainerDied","Data":"a502c2ce1192a654af86f298574d899549fcf545f47d0808cdbffec3be922069"} Dec 10 16:03:05 crc kubenswrapper[4727]: I1210 16:03:05.196014 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp" Dec 10 16:03:05 crc kubenswrapper[4727]: I1210 16:03:05.331938 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr5gd\" (UniqueName: \"kubernetes.io/projected/a08d6fc9-6282-40ef-9c63-99655ea0444c-kube-api-access-qr5gd\") pod \"a08d6fc9-6282-40ef-9c63-99655ea0444c\" (UID: \"a08d6fc9-6282-40ef-9c63-99655ea0444c\") " Dec 10 16:03:05 crc kubenswrapper[4727]: I1210 16:03:05.332032 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a08d6fc9-6282-40ef-9c63-99655ea0444c-ssh-key\") pod \"a08d6fc9-6282-40ef-9c63-99655ea0444c\" (UID: \"a08d6fc9-6282-40ef-9c63-99655ea0444c\") " Dec 10 16:03:05 crc kubenswrapper[4727]: I1210 16:03:05.332157 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a08d6fc9-6282-40ef-9c63-99655ea0444c-inventory\") pod \"a08d6fc9-6282-40ef-9c63-99655ea0444c\" (UID: \"a08d6fc9-6282-40ef-9c63-99655ea0444c\") " Dec 10 16:03:05 crc kubenswrapper[4727]: I1210 16:03:05.342081 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a08d6fc9-6282-40ef-9c63-99655ea0444c-kube-api-access-qr5gd" (OuterVolumeSpecName: "kube-api-access-qr5gd") pod "a08d6fc9-6282-40ef-9c63-99655ea0444c" (UID: "a08d6fc9-6282-40ef-9c63-99655ea0444c"). InnerVolumeSpecName "kube-api-access-qr5gd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:03:05 crc kubenswrapper[4727]: I1210 16:03:05.375395 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a08d6fc9-6282-40ef-9c63-99655ea0444c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a08d6fc9-6282-40ef-9c63-99655ea0444c" (UID: "a08d6fc9-6282-40ef-9c63-99655ea0444c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:03:05 crc kubenswrapper[4727]: I1210 16:03:05.375441 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a08d6fc9-6282-40ef-9c63-99655ea0444c-inventory" (OuterVolumeSpecName: "inventory") pod "a08d6fc9-6282-40ef-9c63-99655ea0444c" (UID: "a08d6fc9-6282-40ef-9c63-99655ea0444c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:03:05 crc kubenswrapper[4727]: I1210 16:03:05.435195 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr5gd\" (UniqueName: \"kubernetes.io/projected/a08d6fc9-6282-40ef-9c63-99655ea0444c-kube-api-access-qr5gd\") on node \"crc\" DevicePath \"\"" Dec 10 16:03:05 crc kubenswrapper[4727]: I1210 16:03:05.435238 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a08d6fc9-6282-40ef-9c63-99655ea0444c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 16:03:05 crc kubenswrapper[4727]: I1210 16:03:05.435251 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a08d6fc9-6282-40ef-9c63-99655ea0444c-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 16:03:05 crc kubenswrapper[4727]: E1210 16:03:05.565842 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:03:05 crc kubenswrapper[4727]: I1210 16:03:05.570560 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp" event={"ID":"a08d6fc9-6282-40ef-9c63-99655ea0444c","Type":"ContainerDied","Data":"b8d1d441d74a24896522aefce2b931fc22f16af11ed2aecd9146976eeb9976d8"} Dec 10 16:03:05 crc kubenswrapper[4727]: I1210 16:03:05.570614 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8d1d441d74a24896522aefce2b931fc22f16af11ed2aecd9146976eeb9976d8" Dec 10 16:03:05 crc kubenswrapper[4727]: I1210 16:03:05.570681 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-s98mp" Dec 10 16:03:12 crc kubenswrapper[4727]: I1210 16:03:12.564657 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:03:12 crc kubenswrapper[4727]: E1210 16:03:12.566370 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:03:13 crc kubenswrapper[4727]: I1210 16:03:13.664738 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tlvqk/crc-debug-rblnq" event={"ID":"bbbb3151-1d33-4f85-b9cf-6e30882bb60e","Type":"ContainerStarted","Data":"7ff014e82b7909e228240e0b82054fc85f635fe65ee8d7a31dcfdf9e4ca8b945"} Dec 10 16:03:13 crc kubenswrapper[4727]: I1210 16:03:13.688343 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tlvqk/crc-debug-rblnq" podStartSLOduration=1.640736977 podStartE2EDuration="14.688314445s" podCreationTimestamp="2025-12-10 16:02:59 +0000 UTC" firstStartedPulling="2025-12-10 16:03:00.147291712 +0000 UTC m=+5484.342066254" lastFinishedPulling="2025-12-10 16:03:13.19486918 +0000 UTC m=+5497.389643722" observedRunningTime="2025-12-10 16:03:13.682537269 +0000 UTC m=+5497.877311811" watchObservedRunningTime="2025-12-10 16:03:13.688314445 +0000 UTC m=+5497.883088987" Dec 10 16:03:16 crc kubenswrapper[4727]: E1210 16:03:16.576016 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:03:18 crc kubenswrapper[4727]: E1210 16:03:18.567434 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:03:24 crc kubenswrapper[4727]: I1210 16:03:24.563337 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:03:24 crc kubenswrapper[4727]: E1210 16:03:24.564242 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:03:31 crc kubenswrapper[4727]: E1210 16:03:31.569287 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:03:31 crc kubenswrapper[4727]: E1210 16:03:31.569850 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:03:38 crc kubenswrapper[4727]: I1210 16:03:38.563263 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:03:38 crc kubenswrapper[4727]: E1210 16:03:38.564039 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:03:42 crc kubenswrapper[4727]: I1210 16:03:42.022988 4727 generic.go:334] "Generic (PLEG): container finished" podID="bbbb3151-1d33-4f85-b9cf-6e30882bb60e" containerID="7ff014e82b7909e228240e0b82054fc85f635fe65ee8d7a31dcfdf9e4ca8b945" exitCode=0 Dec 10 16:03:42 crc kubenswrapper[4727]: I1210 16:03:42.023060 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tlvqk/crc-debug-rblnq" event={"ID":"bbbb3151-1d33-4f85-b9cf-6e30882bb60e","Type":"ContainerDied","Data":"7ff014e82b7909e228240e0b82054fc85f635fe65ee8d7a31dcfdf9e4ca8b945"} Dec 10 16:03:42 crc kubenswrapper[4727]: E1210 16:03:42.066757 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbbb3151_1d33_4f85_b9cf_6e30882bb60e.slice/crio-7ff014e82b7909e228240e0b82054fc85f635fe65ee8d7a31dcfdf9e4ca8b945.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbbb3151_1d33_4f85_b9cf_6e30882bb60e.slice/crio-conmon-7ff014e82b7909e228240e0b82054fc85f635fe65ee8d7a31dcfdf9e4ca8b945.scope\": RecentStats: unable to find data in memory cache]" Dec 10 16:03:43 crc kubenswrapper[4727]: I1210 16:03:43.174043 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tlvqk/crc-debug-rblnq" Dec 10 16:03:43 crc kubenswrapper[4727]: I1210 16:03:43.217704 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tlvqk/crc-debug-rblnq"] Dec 10 16:03:43 crc kubenswrapper[4727]: I1210 16:03:43.232800 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tlvqk/crc-debug-rblnq"] Dec 10 16:03:43 crc kubenswrapper[4727]: I1210 16:03:43.349808 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bbbb3151-1d33-4f85-b9cf-6e30882bb60e-host\") pod \"bbbb3151-1d33-4f85-b9cf-6e30882bb60e\" (UID: \"bbbb3151-1d33-4f85-b9cf-6e30882bb60e\") " Dec 10 16:03:43 crc kubenswrapper[4727]: I1210 16:03:43.349891 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbbb3151-1d33-4f85-b9cf-6e30882bb60e-host" (OuterVolumeSpecName: "host") pod "bbbb3151-1d33-4f85-b9cf-6e30882bb60e" (UID: "bbbb3151-1d33-4f85-b9cf-6e30882bb60e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 16:03:43 crc kubenswrapper[4727]: I1210 16:03:43.349941 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pssg5\" (UniqueName: \"kubernetes.io/projected/bbbb3151-1d33-4f85-b9cf-6e30882bb60e-kube-api-access-pssg5\") pod \"bbbb3151-1d33-4f85-b9cf-6e30882bb60e\" (UID: \"bbbb3151-1d33-4f85-b9cf-6e30882bb60e\") " Dec 10 16:03:43 crc kubenswrapper[4727]: I1210 16:03:43.350536 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bbbb3151-1d33-4f85-b9cf-6e30882bb60e-host\") on node \"crc\" DevicePath \"\"" Dec 10 16:03:43 crc kubenswrapper[4727]: I1210 16:03:43.363248 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbbb3151-1d33-4f85-b9cf-6e30882bb60e-kube-api-access-pssg5" (OuterVolumeSpecName: "kube-api-access-pssg5") pod "bbbb3151-1d33-4f85-b9cf-6e30882bb60e" (UID: "bbbb3151-1d33-4f85-b9cf-6e30882bb60e"). InnerVolumeSpecName "kube-api-access-pssg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:03:43 crc kubenswrapper[4727]: I1210 16:03:43.452404 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pssg5\" (UniqueName: \"kubernetes.io/projected/bbbb3151-1d33-4f85-b9cf-6e30882bb60e-kube-api-access-pssg5\") on node \"crc\" DevicePath \"\"" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.046534 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d3c41e9d4ad0cd2924f20bc64c2d0e9338ab29355e946f2655058d5039dcea9" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.046694 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tlvqk/crc-debug-rblnq" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.412221 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tlvqk/crc-debug-pwtpp"] Dec 10 16:03:44 crc kubenswrapper[4727]: E1210 16:03:44.412708 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbb3151-1d33-4f85-b9cf-6e30882bb60e" containerName="container-00" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.412723 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbb3151-1d33-4f85-b9cf-6e30882bb60e" containerName="container-00" Dec 10 16:03:44 crc kubenswrapper[4727]: E1210 16:03:44.412783 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08d6fc9-6282-40ef-9c63-99655ea0444c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.412794 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08d6fc9-6282-40ef-9c63-99655ea0444c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.413032 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbbb3151-1d33-4f85-b9cf-6e30882bb60e" containerName="container-00" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.413049 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a08d6fc9-6282-40ef-9c63-99655ea0444c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.413983 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tlvqk/crc-debug-pwtpp" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.417031 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tlvqk"/"default-dockercfg-s8xfr" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.474136 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ede4f24f-cfa8-4dea-a982-b28a89b37d0d-host\") pod \"crc-debug-pwtpp\" (UID: \"ede4f24f-cfa8-4dea-a982-b28a89b37d0d\") " pod="openshift-must-gather-tlvqk/crc-debug-pwtpp" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.474318 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svjls\" (UniqueName: \"kubernetes.io/projected/ede4f24f-cfa8-4dea-a982-b28a89b37d0d-kube-api-access-svjls\") pod \"crc-debug-pwtpp\" (UID: \"ede4f24f-cfa8-4dea-a982-b28a89b37d0d\") " pod="openshift-must-gather-tlvqk/crc-debug-pwtpp" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.575871 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbbb3151-1d33-4f85-b9cf-6e30882bb60e" path="/var/lib/kubelet/pods/bbbb3151-1d33-4f85-b9cf-6e30882bb60e/volumes" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.577657 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svjls\" (UniqueName: \"kubernetes.io/projected/ede4f24f-cfa8-4dea-a982-b28a89b37d0d-kube-api-access-svjls\") pod \"crc-debug-pwtpp\" (UID: \"ede4f24f-cfa8-4dea-a982-b28a89b37d0d\") " pod="openshift-must-gather-tlvqk/crc-debug-pwtpp" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.578695 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/ede4f24f-cfa8-4dea-a982-b28a89b37d0d-host\") pod \"crc-debug-pwtpp\" (UID: \"ede4f24f-cfa8-4dea-a982-b28a89b37d0d\") " pod="openshift-must-gather-tlvqk/crc-debug-pwtpp" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.578864 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ede4f24f-cfa8-4dea-a982-b28a89b37d0d-host\") pod \"crc-debug-pwtpp\" (UID: \"ede4f24f-cfa8-4dea-a982-b28a89b37d0d\") " pod="openshift-must-gather-tlvqk/crc-debug-pwtpp" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.603959 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svjls\" (UniqueName: \"kubernetes.io/projected/ede4f24f-cfa8-4dea-a982-b28a89b37d0d-kube-api-access-svjls\") pod \"crc-debug-pwtpp\" (UID: \"ede4f24f-cfa8-4dea-a982-b28a89b37d0d\") " pod="openshift-must-gather-tlvqk/crc-debug-pwtpp" Dec 10 16:03:44 crc kubenswrapper[4727]: I1210 16:03:44.732506 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tlvqk/crc-debug-pwtpp" Dec 10 16:03:45 crc kubenswrapper[4727]: I1210 16:03:45.065717 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tlvqk/crc-debug-pwtpp" event={"ID":"ede4f24f-cfa8-4dea-a982-b28a89b37d0d","Type":"ContainerStarted","Data":"3398c9bde04a4857a3addf61274e934571d76effdff21e71e2adc11951e5e9de"} Dec 10 16:03:45 crc kubenswrapper[4727]: E1210 16:03:45.566293 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:03:46 crc kubenswrapper[4727]: I1210 16:03:46.078318 4727 generic.go:334] "Generic (PLEG): container finished" podID="ede4f24f-cfa8-4dea-a982-b28a89b37d0d" containerID="6e71b1f418096686fa7cff8f88a8b7e7469327794f410dae41bc635fc4a46e57" exitCode=1 Dec 10 16:03:46 crc kubenswrapper[4727]: I1210 16:03:46.078386 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tlvqk/crc-debug-pwtpp" event={"ID":"ede4f24f-cfa8-4dea-a982-b28a89b37d0d","Type":"ContainerDied","Data":"6e71b1f418096686fa7cff8f88a8b7e7469327794f410dae41bc635fc4a46e57"} Dec 10 16:03:46 crc kubenswrapper[4727]: I1210 16:03:46.123274 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tlvqk/crc-debug-pwtpp"] Dec 10 16:03:46 crc kubenswrapper[4727]: I1210 16:03:46.132660 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tlvqk/crc-debug-pwtpp"] Dec 10 16:03:46 crc kubenswrapper[4727]: E1210 16:03:46.581278 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:03:47 crc kubenswrapper[4727]: I1210 16:03:47.312602 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tlvqk/crc-debug-pwtpp" Dec 10 16:03:47 crc kubenswrapper[4727]: I1210 16:03:47.397336 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svjls\" (UniqueName: \"kubernetes.io/projected/ede4f24f-cfa8-4dea-a982-b28a89b37d0d-kube-api-access-svjls\") pod \"ede4f24f-cfa8-4dea-a982-b28a89b37d0d\" (UID: \"ede4f24f-cfa8-4dea-a982-b28a89b37d0d\") " Dec 10 16:03:47 crc kubenswrapper[4727]: I1210 16:03:47.397521 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ede4f24f-cfa8-4dea-a982-b28a89b37d0d-host\") pod \"ede4f24f-cfa8-4dea-a982-b28a89b37d0d\" (UID: \"ede4f24f-cfa8-4dea-a982-b28a89b37d0d\") " Dec 10 16:03:47 crc kubenswrapper[4727]: I1210 16:03:47.397654 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ede4f24f-cfa8-4dea-a982-b28a89b37d0d-host" (OuterVolumeSpecName: "host") pod "ede4f24f-cfa8-4dea-a982-b28a89b37d0d" (UID: "ede4f24f-cfa8-4dea-a982-b28a89b37d0d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 16:03:47 crc kubenswrapper[4727]: I1210 16:03:47.399393 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ede4f24f-cfa8-4dea-a982-b28a89b37d0d-host\") on node \"crc\" DevicePath \"\"" Dec 10 16:03:47 crc kubenswrapper[4727]: I1210 16:03:47.403145 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede4f24f-cfa8-4dea-a982-b28a89b37d0d-kube-api-access-svjls" (OuterVolumeSpecName: "kube-api-access-svjls") pod "ede4f24f-cfa8-4dea-a982-b28a89b37d0d" (UID: "ede4f24f-cfa8-4dea-a982-b28a89b37d0d"). InnerVolumeSpecName "kube-api-access-svjls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:03:47 crc kubenswrapper[4727]: I1210 16:03:47.501621 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svjls\" (UniqueName: \"kubernetes.io/projected/ede4f24f-cfa8-4dea-a982-b28a89b37d0d-kube-api-access-svjls\") on node \"crc\" DevicePath \"\"" Dec 10 16:03:48 crc kubenswrapper[4727]: I1210 16:03:48.102347 4727 scope.go:117] "RemoveContainer" containerID="6e71b1f418096686fa7cff8f88a8b7e7469327794f410dae41bc635fc4a46e57" Dec 10 16:03:48 crc kubenswrapper[4727]: I1210 16:03:48.102385 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tlvqk/crc-debug-pwtpp" Dec 10 16:03:48 crc kubenswrapper[4727]: I1210 16:03:48.577853 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede4f24f-cfa8-4dea-a982-b28a89b37d0d" path="/var/lib/kubelet/pods/ede4f24f-cfa8-4dea-a982-b28a89b37d0d/volumes" Dec 10 16:03:53 crc kubenswrapper[4727]: I1210 16:03:53.563941 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:03:53 crc kubenswrapper[4727]: E1210 16:03:53.564740 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:03:59 crc kubenswrapper[4727]: E1210 16:03:59.565457 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:04:00 crc kubenswrapper[4727]: E1210 16:04:00.566061 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:04:04 crc kubenswrapper[4727]: I1210 16:04:04.563744 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:04:04 crc kubenswrapper[4727]: E1210 16:04:04.564433 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:04:12 crc kubenswrapper[4727]: I1210 16:04:12.292794 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6rr6m"] Dec 10 16:04:12 crc kubenswrapper[4727]: E1210 16:04:12.293944 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede4f24f-cfa8-4dea-a982-b28a89b37d0d" containerName="container-00" Dec 10 16:04:12 crc kubenswrapper[4727]: I1210 16:04:12.293962 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede4f24f-cfa8-4dea-a982-b28a89b37d0d" containerName="container-00" Dec 10 16:04:12 crc kubenswrapper[4727]: I1210 16:04:12.294282 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede4f24f-cfa8-4dea-a982-b28a89b37d0d" containerName="container-00" Dec 10 16:04:12 crc kubenswrapper[4727]: I1210 16:04:12.296424 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:12 crc kubenswrapper[4727]: I1210 16:04:12.304977 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rr6m"] Dec 10 16:04:12 crc kubenswrapper[4727]: I1210 16:04:12.444534 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-catalog-content\") pod \"redhat-marketplace-6rr6m\" (UID: \"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a\") " pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:12 crc kubenswrapper[4727]: I1210 16:04:12.444629 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq4lg\" (UniqueName: \"kubernetes.io/projected/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-kube-api-access-pq4lg\") pod \"redhat-marketplace-6rr6m\" (UID: \"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a\") " pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:12 crc kubenswrapper[4727]: I1210 16:04:12.444680 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-utilities\") pod \"redhat-marketplace-6rr6m\" (UID: \"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a\") " pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:12 crc kubenswrapper[4727]: I1210 16:04:12.546745 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-utilities\") pod \"redhat-marketplace-6rr6m\" (UID: \"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a\") " pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:12 crc kubenswrapper[4727]: I1210 16:04:12.547253 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-catalog-content\") pod \"redhat-marketplace-6rr6m\" (UID: \"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a\") " pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:12 crc kubenswrapper[4727]: I1210 16:04:12.547350 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq4lg\" (UniqueName: \"kubernetes.io/projected/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-kube-api-access-pq4lg\") pod \"redhat-marketplace-6rr6m\" (UID: \"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a\") " pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:12 crc kubenswrapper[4727]: I1210 16:04:12.548267 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-utilities\") pod \"redhat-marketplace-6rr6m\" (UID: \"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a\") " pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:12 crc kubenswrapper[4727]: I1210 16:04:12.548453 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-catalog-content\") pod \"redhat-marketplace-6rr6m\" (UID: \"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a\") " pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:12 crc kubenswrapper[4727]: E1210 16:04:12.565174 4727 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:04:12 crc kubenswrapper[4727]: I1210 16:04:12.728825 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq4lg\" (UniqueName: \"kubernetes.io/projected/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-kube-api-access-pq4lg\") pod \"redhat-marketplace-6rr6m\" (UID: \"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a\") " pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:12 crc kubenswrapper[4727]: I1210 16:04:12.932709 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:13 crc kubenswrapper[4727]: I1210 16:04:13.572057 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rr6m"] Dec 10 16:04:14 crc kubenswrapper[4727]: I1210 16:04:14.415638 4727 generic.go:334] "Generic (PLEG): container finished" podID="c9e1ddd4-f48d-45c7-a565-9d1dad1b987a" containerID="b936f632ade4e115db0e7a809fede02fffd10f35524ce0c75c3ef927427d8128" exitCode=0 Dec 10 16:04:14 crc kubenswrapper[4727]: I1210 16:04:14.415704 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rr6m" event={"ID":"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a","Type":"ContainerDied","Data":"b936f632ade4e115db0e7a809fede02fffd10f35524ce0c75c3ef927427d8128"} Dec 10 16:04:14 crc kubenswrapper[4727]: I1210 16:04:14.415987 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rr6m" event={"ID":"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a","Type":"ContainerStarted","Data":"601a92e30097174298bf83e5c4d8d28ad66301e90524cf862c4b64a745326914"} Dec 10 16:04:14 crc kubenswrapper[4727]: I1210 16:04:14.417741 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:04:14 crc kubenswrapper[4727]: E1210 16:04:14.700721 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:04:14 crc kubenswrapper[4727]: E1210 16:04:14.700997 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:04:14 crc kubenswrapper[4727]: E1210 16:04:14.701152 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:04:14 crc kubenswrapper[4727]: E1210 16:04:14.702323 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:04:15 crc kubenswrapper[4727]: I1210 16:04:15.563628 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:04:15 crc kubenswrapper[4727]: E1210 16:04:15.563890 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:04:16 crc kubenswrapper[4727]: I1210 16:04:16.441643 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rr6m" event={"ID":"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a","Type":"ContainerStarted","Data":"5b4aeca40138247c02c9d86161f00032a0207e81eacbe391999f951b4b473fb1"} Dec 10 16:04:17 crc kubenswrapper[4727]: I1210 16:04:17.463461 4727 generic.go:334] "Generic (PLEG): container finished" podID="c9e1ddd4-f48d-45c7-a565-9d1dad1b987a" containerID="5b4aeca40138247c02c9d86161f00032a0207e81eacbe391999f951b4b473fb1" exitCode=0 Dec 10 16:04:17 crc kubenswrapper[4727]: I1210 16:04:17.463556 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rr6m" event={"ID":"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a","Type":"ContainerDied","Data":"5b4aeca40138247c02c9d86161f00032a0207e81eacbe391999f951b4b473fb1"} Dec 10 16:04:18 crc kubenswrapper[4727]: I1210 16:04:18.474023 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rr6m" event={"ID":"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a","Type":"ContainerStarted","Data":"fa81eb4ff073626cdc82d6db4aa14fb1e8195f12d2dffaff3ed96df62d42d044"} Dec 10 16:04:18 crc kubenswrapper[4727]: I1210 16:04:18.504404 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6rr6m" podStartSLOduration=2.841241891 podStartE2EDuration="6.504385219s" podCreationTimestamp="2025-12-10 16:04:12 +0000 UTC" firstStartedPulling="2025-12-10 16:04:14.417506671 +0000 UTC m=+5558.612281213" lastFinishedPulling="2025-12-10 16:04:18.080650009 +0000 UTC m=+5562.275424541" observedRunningTime="2025-12-10 16:04:18.495371012 +0000 UTC m=+5562.690145544" watchObservedRunningTime="2025-12-10 16:04:18.504385219 +0000 UTC m=+5562.699159761" Dec 10 16:04:22 crc kubenswrapper[4727]: I1210 16:04:22.934214 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:22 crc kubenswrapper[4727]: I1210 16:04:22.934778 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:23 crc kubenswrapper[4727]: I1210 16:04:23.293294 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:23 crc kubenswrapper[4727]: I1210 16:04:23.568187 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:25 crc kubenswrapper[4727]: E1210 16:04:25.565542 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:04:26 crc kubenswrapper[4727]: I1210 16:04:26.885420 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rr6m"] Dec 10 16:04:26 crc kubenswrapper[4727]: I1210 16:04:26.886613 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6rr6m" podUID="c9e1ddd4-f48d-45c7-a565-9d1dad1b987a" containerName="registry-server" containerID="cri-o://fa81eb4ff073626cdc82d6db4aa14fb1e8195f12d2dffaff3ed96df62d42d044" gracePeriod=2 Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.436066 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.532420 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq4lg\" (UniqueName: \"kubernetes.io/projected/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-kube-api-access-pq4lg\") pod \"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a\" (UID: \"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a\") " Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.532586 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-catalog-content\") pod \"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a\" (UID: \"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a\") " Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.532729 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-utilities\") pod \"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a\" (UID: \"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a\") " Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.533822 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-utilities" (OuterVolumeSpecName: "utilities") pod "c9e1ddd4-f48d-45c7-a565-9d1dad1b987a" (UID: "c9e1ddd4-f48d-45c7-a565-9d1dad1b987a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.546941 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-kube-api-access-pq4lg" (OuterVolumeSpecName: "kube-api-access-pq4lg") pod "c9e1ddd4-f48d-45c7-a565-9d1dad1b987a" (UID: "c9e1ddd4-f48d-45c7-a565-9d1dad1b987a"). InnerVolumeSpecName "kube-api-access-pq4lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.556495 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9e1ddd4-f48d-45c7-a565-9d1dad1b987a" (UID: "c9e1ddd4-f48d-45c7-a565-9d1dad1b987a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:04:27 crc kubenswrapper[4727]: E1210 16:04:27.565731 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.580068 4727 generic.go:334] "Generic (PLEG): container finished" podID="c9e1ddd4-f48d-45c7-a565-9d1dad1b987a" containerID="fa81eb4ff073626cdc82d6db4aa14fb1e8195f12d2dffaff3ed96df62d42d044" exitCode=0 Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.580112 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rr6m" event={"ID":"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a","Type":"ContainerDied","Data":"fa81eb4ff073626cdc82d6db4aa14fb1e8195f12d2dffaff3ed96df62d42d044"} Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.580141 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rr6m" event={"ID":"c9e1ddd4-f48d-45c7-a565-9d1dad1b987a","Type":"ContainerDied","Data":"601a92e30097174298bf83e5c4d8d28ad66301e90524cf862c4b64a745326914"} Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.580159 4727 scope.go:117] "RemoveContainer" containerID="fa81eb4ff073626cdc82d6db4aa14fb1e8195f12d2dffaff3ed96df62d42d044" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.580850 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rr6m" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.627783 4727 scope.go:117] "RemoveContainer" containerID="5b4aeca40138247c02c9d86161f00032a0207e81eacbe391999f951b4b473fb1" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.632781 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rr6m"] Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.636308 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.636354 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.636369 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq4lg\" (UniqueName: \"kubernetes.io/projected/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a-kube-api-access-pq4lg\") on node \"crc\" DevicePath \"\"" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.645977 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rr6m"] Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.649346 4727 scope.go:117] "RemoveContainer" containerID="b936f632ade4e115db0e7a809fede02fffd10f35524ce0c75c3ef927427d8128" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.702500 4727 scope.go:117] "RemoveContainer" containerID="fa81eb4ff073626cdc82d6db4aa14fb1e8195f12d2dffaff3ed96df62d42d044" Dec 10 16:04:27 crc kubenswrapper[4727]: E1210 16:04:27.703012 4727 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"fa81eb4ff073626cdc82d6db4aa14fb1e8195f12d2dffaff3ed96df62d42d044\": container with ID starting with fa81eb4ff073626cdc82d6db4aa14fb1e8195f12d2dffaff3ed96df62d42d044 not found: ID does not exist" containerID="fa81eb4ff073626cdc82d6db4aa14fb1e8195f12d2dffaff3ed96df62d42d044" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.703068 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa81eb4ff073626cdc82d6db4aa14fb1e8195f12d2dffaff3ed96df62d42d044"} err="failed to get container status \"fa81eb4ff073626cdc82d6db4aa14fb1e8195f12d2dffaff3ed96df62d42d044\": rpc error: code = NotFound desc = could not find container \"fa81eb4ff073626cdc82d6db4aa14fb1e8195f12d2dffaff3ed96df62d42d044\": container with ID starting with fa81eb4ff073626cdc82d6db4aa14fb1e8195f12d2dffaff3ed96df62d42d044 not found: ID does not exist" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.703099 4727 scope.go:117] "RemoveContainer" containerID="5b4aeca40138247c02c9d86161f00032a0207e81eacbe391999f951b4b473fb1" Dec 10 16:04:27 crc kubenswrapper[4727]: E1210 16:04:27.703499 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b4aeca40138247c02c9d86161f00032a0207e81eacbe391999f951b4b473fb1\": container with ID starting with 5b4aeca40138247c02c9d86161f00032a0207e81eacbe391999f951b4b473fb1 not found: ID does not exist" containerID="5b4aeca40138247c02c9d86161f00032a0207e81eacbe391999f951b4b473fb1" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.703543 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b4aeca40138247c02c9d86161f00032a0207e81eacbe391999f951b4b473fb1"} err="failed to get container status \"5b4aeca40138247c02c9d86161f00032a0207e81eacbe391999f951b4b473fb1\": rpc error: code = NotFound desc = could not find container \"5b4aeca40138247c02c9d86161f00032a0207e81eacbe391999f951b4b473fb1\": container with ID starting with 5b4aeca40138247c02c9d86161f00032a0207e81eacbe391999f951b4b473fb1 not found: ID does not exist" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.703574 4727 scope.go:117] "RemoveContainer" containerID="b936f632ade4e115db0e7a809fede02fffd10f35524ce0c75c3ef927427d8128" Dec 10 16:04:27 crc kubenswrapper[4727]: E1210 16:04:27.703867 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b936f632ade4e115db0e7a809fede02fffd10f35524ce0c75c3ef927427d8128\": container with ID starting with b936f632ade4e115db0e7a809fede02fffd10f35524ce0c75c3ef927427d8128 not found: ID does not exist" containerID="b936f632ade4e115db0e7a809fede02fffd10f35524ce0c75c3ef927427d8128" Dec 10 16:04:27 crc kubenswrapper[4727]: I1210 16:04:27.703914 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b936f632ade4e115db0e7a809fede02fffd10f35524ce0c75c3ef927427d8128"} err="failed to get container status \"b936f632ade4e115db0e7a809fede02fffd10f35524ce0c75c3ef927427d8128\": rpc error: code = NotFound desc = could not find container \"b936f632ade4e115db0e7a809fede02fffd10f35524ce0c75c3ef927427d8128\": container with ID starting with b936f632ade4e115db0e7a809fede02fffd10f35524ce0c75c3ef927427d8128 not found: ID does not exist" Dec 10 16:04:28 crc kubenswrapper[4727]: I1210 16:04:28.575687 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c9e1ddd4-f48d-45c7-a565-9d1dad1b987a" path="/var/lib/kubelet/pods/c9e1ddd4-f48d-45c7-a565-9d1dad1b987a/volumes" Dec 10 16:04:30 crc kubenswrapper[4727]: I1210 16:04:30.568179 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:04:30 crc kubenswrapper[4727]: E1210 16:04:30.568730 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:04:36 crc kubenswrapper[4727]: I1210 16:04:36.039864 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_00ee5804-d85a-432e-9295-b018259dcf38/init-config-reloader/0.log" Dec 10 16:04:36 crc kubenswrapper[4727]: I1210 16:04:36.254792 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_00ee5804-d85a-432e-9295-b018259dcf38/init-config-reloader/0.log" Dec 10 16:04:36 crc kubenswrapper[4727]: I1210 16:04:36.320320 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_00ee5804-d85a-432e-9295-b018259dcf38/config-reloader/0.log" Dec 10 16:04:36 crc kubenswrapper[4727]: I1210 16:04:36.359675 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_00ee5804-d85a-432e-9295-b018259dcf38/alertmanager/0.log" Dec 10 16:04:36 crc kubenswrapper[4727]: I1210 16:04:36.782914 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f6784f8dd-pcqm7_adb9291c-b698-4aa7-b4c9-579f80d0183b/barbican-api/0.log" Dec 10 16:04:36 crc kubenswrapper[4727]: I1210 16:04:36.902445 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f6784f8dd-pcqm7_adb9291c-b698-4aa7-b4c9-579f80d0183b/barbican-api-log/0.log" Dec 10 16:04:37 crc kubenswrapper[4727]: I1210 16:04:37.034746 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-cf8d46dd8-mljgg_a20e7837-8316-4dd1-91d5-60d2ea604213/barbican-keystone-listener-log/0.log" Dec 10 16:04:37 crc kubenswrapper[4727]: I1210 16:04:37.047712 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-cf8d46dd8-mljgg_a20e7837-8316-4dd1-91d5-60d2ea604213/barbican-keystone-listener/0.log" Dec 10 16:04:37 crc kubenswrapper[4727]: I1210 16:04:37.188426 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c6cd97dcf-twztd_573bf6bf-ba45-4eaf-a331-e11c2696a1ab/barbican-worker/0.log" Dec 10 16:04:37 crc kubenswrapper[4727]: I1210 16:04:37.261097 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c6cd97dcf-twztd_573bf6bf-ba45-4eaf-a331-e11c2696a1ab/barbican-worker-log/0.log" Dec 10 16:04:37 crc kubenswrapper[4727]: I1210 16:04:37.397574 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-4zfll_bc19da82-3ee5-4fe0-b6f6-fd657ecc95b2/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:04:37 crc kubenswrapper[4727]: I1210 16:04:37.568589 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_727601cd-934c-4d0d-b32e-c66a80adbb9f/ceilometer-notification-agent/0.log" Dec 10 16:04:37 crc kubenswrapper[4727]: I1210 16:04:37.637426 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_727601cd-934c-4d0d-b32e-c66a80adbb9f/proxy-httpd/0.log" Dec 10 16:04:37 crc kubenswrapper[4727]: I1210 16:04:37.757485 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_727601cd-934c-4d0d-b32e-c66a80adbb9f/sg-core/0.log" Dec 10 16:04:37 crc kubenswrapper[4727]: I1210 16:04:37.858248 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054/cinder-api/0.log" Dec 10 16:04:37 crc kubenswrapper[4727]: I1210 16:04:37.900092 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1a8b86d8-d172-48a1-9dfa-0bcdd7cf7054/cinder-api-log/0.log" Dec 10 16:04:38 crc kubenswrapper[4727]: I1210 16:04:38.093434 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a/cinder-scheduler/0.log" Dec 10 16:04:38 crc kubenswrapper[4727]: I1210 16:04:38.175360 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_51f9a1aa-f606-4fd7-b1ec-ad3ab74af66a/probe/0.log" Dec 10 16:04:38 crc kubenswrapper[4727]: I1210 16:04:38.324710 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_4439386b-d2b2-4ac3-a0e5-07623192084c/cloudkitty-api-log/0.log" Dec 10 16:04:38 crc kubenswrapper[4727]: I1210 16:04:38.388238 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_4439386b-d2b2-4ac3-a0e5-07623192084c/cloudkitty-api/0.log" Dec 10 16:04:38 crc kubenswrapper[4727]: I1210 16:04:38.622797 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_c11b517c-638f-4542-8a0f-c05cab3a8f7c/loki-compactor/0.log" Dec 10 16:04:38 crc kubenswrapper[4727]: I1210 16:04:38.837323 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-h865p_d6dcffc0-6984-4b52-b5aa-e6703e78a9c0/gateway/0.log" Dec 10 16:04:38 crc kubenswrapper[4727]: I1210 16:04:38.865784 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-66dfd9bb-q5hvs_b2a7f948-3565-42cf-81ed-c16f1b770019/loki-distributor/0.log" Dec 10 16:04:39 crc kubenswrapper[4727]: I1210 16:04:39.012673 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-jj92k_350e96e2-3fa8-4673-a53e-f925e5922be3/gateway/0.log" Dec 10 16:04:39 crc kubenswrapper[4727]: I1210 16:04:39.106303 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_ddf8ff74-065e-4883-87c7-2bf30fccd234/loki-index-gateway/0.log" Dec 10 16:04:39 crc kubenswrapper[4727]: I1210 16:04:39.375979 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_bbecc1f3-56d2-4852-8f9d-4397943c2b8b/loki-ingester/0.log" Dec 10 16:04:39 crc kubenswrapper[4727]: E1210 16:04:39.565631 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:04:39 crc kubenswrapper[4727]: I1210 16:04:39.619198 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-795fd8f8cc-tbx4t_0f171f74-b9e1-42a3-9907-4d34dabee6c2/loki-querier/0.log" Dec 10 16:04:39 crc kubenswrapper[4727]: I1210 16:04:39.843204 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-5cd44666df-z64ld_680c351c-7945-4752-83ce-8cdd827cf4e5/loki-query-frontend/0.log" Dec 10 16:04:40 crc kubenswrapper[4727]: I1210 16:04:40.167803 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-q82js_70678c11-c77a-4d32-a1aa-9c4c43140b2f/init/0.log" Dec 10 16:04:40 crc kubenswrapper[4727]: I1210 16:04:40.322941 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-q82js_70678c11-c77a-4d32-a1aa-9c4c43140b2f/init/0.log" Dec 10 16:04:40 crc kubenswrapper[4727]: I1210 16:04:40.496609 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-q82js_70678c11-c77a-4d32-a1aa-9c4c43140b2f/dnsmasq-dns/0.log" Dec 10 16:04:40 crc kubenswrapper[4727]: I1210 16:04:40.605970 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-49cgn_9cf54f02-df0d-4ef1-a946-b9a1bc279ac9/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:04:40 crc kubenswrapper[4727]: E1210 16:04:40.702004 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:04:40 crc kubenswrapper[4727]: E1210 16:04:40.702068 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:04:40 crc kubenswrapper[4727]: E1210 16:04:40.702217 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:04:40 crc kubenswrapper[4727]: E1210 16:04:40.703327 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:04:40 crc kubenswrapper[4727]: I1210 16:04:40.919460 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-5tlh5_571070d9-6ff3-4477-b6dd-567afc8be7e1/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:04:41 crc kubenswrapper[4727]: I1210 16:04:41.188349 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-b46zs_791c3204-0b3f-4004-8871-8af969076bc2/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:04:41 crc kubenswrapper[4727]: I1210 16:04:41.205939 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-b76mr_08f2ae25-5b39-416e-9f25-830650cc91d0/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:04:41 crc kubenswrapper[4727]: I1210 16:04:41.498645 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-s98mp_a08d6fc9-6282-40ef-9c63-99655ea0444c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:04:41 crc kubenswrapper[4727]: I1210 16:04:41.546231 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-sqtj9_491442b9-eec3-46e4-9b19-998f5fcd72af/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:04:41 crc kubenswrapper[4727]: I1210 16:04:41.563650 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:04:41 crc kubenswrapper[4727]: E1210 16:04:41.564052 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:04:41 crc kubenswrapper[4727]: I1210 16:04:41.665657 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-z594q_7b7cce52-493a-4a84-a51a-768d6d40d69d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:04:41 crc kubenswrapper[4727]: I1210 16:04:41.822612 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e/glance-log/0.log" Dec 10 16:04:41 crc kubenswrapper[4727]: I1210 16:04:41.832526 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8002b3e7-2bf2-45ac-a5b3-0e423b5f7f9e/glance-httpd/0.log" Dec 10 16:04:42 crc kubenswrapper[4727]: I1210 16:04:42.077175 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_027b0dec-e154-4f38-be80-ae169e00c6a4/glance-httpd/0.log" Dec 10 16:04:42 crc kubenswrapper[4727]: I1210 16:04:42.095699 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_027b0dec-e154-4f38-be80-ae169e00c6a4/glance-log/0.log" Dec 10 16:04:42 crc kubenswrapper[4727]: I1210 16:04:42.340936 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29422981-hs2nn_5686911a-63ad-487e-8dc4-c9c833c20f51/keystone-cron/0.log" Dec 10 16:04:42 crc kubenswrapper[4727]: I1210 16:04:42.496465 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-584578f496-pdfh5_e730f65d-c13d-4603-8db0-ed64afa9584a/keystone-api/0.log" Dec 10 16:04:42 crc kubenswrapper[4727]: I1210 16:04:42.874881 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29423041-9xj8q_a614b5ac-cc30-4106-882b-7089e9fba8b7/keystone-cron/0.log" Dec 10 16:04:42 crc kubenswrapper[4727]: I1210 16:04:42.914879 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_84ba56c7-2390-4e8f-a47b-690a94da6c20/kube-state-metrics/0.log" Dec 10 16:04:43 crc kubenswrapper[4727]: I1210 16:04:43.287094 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78f5bcc65c-vgwfj_b4032a4d-aa42-4515-a09c-6647e6d0b7d5/neutron-api/0.log" Dec 10 16:04:43 crc kubenswrapper[4727]: I1210 16:04:43.371421 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78f5bcc65c-vgwfj_b4032a4d-aa42-4515-a09c-6647e6d0b7d5/neutron-httpd/0.log" Dec 10 16:04:43 crc kubenswrapper[4727]: I1210 16:04:43.768179 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4a402d8e-da25-43fd-a5c8-bb0588d544ab/nova-api-log/0.log" Dec 10 16:04:44 crc kubenswrapper[4727]: I1210 16:04:44.137094 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_bc159773-04ff-4e53-8ab1-7a292491299e/nova-cell0-conductor-conductor/0.log" Dec 10 16:04:44 crc kubenswrapper[4727]: I1210 16:04:44.222399 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4a402d8e-da25-43fd-a5c8-bb0588d544ab/nova-api-api/0.log" Dec 10 16:04:44 crc kubenswrapper[4727]: I1210 16:04:44.304164 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_b867ef72-dc0e-475c-9368-ad959ef5c131/cloudkitty-proc/0.log" Dec 10 16:04:44 crc kubenswrapper[4727]: I1210 16:04:44.367618 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d5b933c0-cf5d-491a-99f2-901be53e7950/nova-cell1-conductor-conductor/0.log" Dec 10 16:04:44 crc kubenswrapper[4727]: I1210 16:04:44.592877 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fd28088f-9229-4753-a618-7b948ab06773/nova-cell1-novncproxy-novncproxy/0.log" Dec 10 16:04:44 crc kubenswrapper[4727]: I1210 16:04:44.673708 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_86934207-9ef6-488d-8f95-4ab3ad0c5fc7/nova-metadata-log/0.log" Dec 10 16:04:45 crc kubenswrapper[4727]: I1210 16:04:45.007042 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4f496d39-b701-4789-aaff-d7bf1602a945/nova-scheduler-scheduler/0.log" Dec 10 16:04:45 crc kubenswrapper[4727]: I1210 16:04:45.130324 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e1e2f39a-c206-4375-bea1-db945f0b3003/mysql-bootstrap/0.log" Dec 10 16:04:45 crc kubenswrapper[4727]: I1210 16:04:45.321673 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e1e2f39a-c206-4375-bea1-db945f0b3003/mysql-bootstrap/0.log" Dec 10 16:04:45 crc kubenswrapper[4727]: I1210 16:04:45.370858 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_e1e2f39a-c206-4375-bea1-db945f0b3003/galera/0.log" Dec 10 16:04:45 crc kubenswrapper[4727]: I1210 16:04:45.550226 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f0273aa7-a359-4d67-9c86-7920c5d69e11/mysql-bootstrap/0.log" Dec 10 16:04:45 crc kubenswrapper[4727]: I1210 16:04:45.848106 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f0273aa7-a359-4d67-9c86-7920c5d69e11/galera/0.log" Dec 10 16:04:45 crc kubenswrapper[4727]: I1210 16:04:45.876013 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f0273aa7-a359-4d67-9c86-7920c5d69e11/mysql-bootstrap/0.log" Dec 10 16:04:46 crc kubenswrapper[4727]: I1210 16:04:46.095959 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d3fd12da-d7cc-49bc-b30b-346a7dd11f92/openstackclient/0.log" Dec 10 16:04:46 crc kubenswrapper[4727]: I1210 16:04:46.227879 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vdzvd_afafee8a-246f-4de6-90c8-efb386092985/openstack-network-exporter/0.log" Dec 10 16:04:46 crc kubenswrapper[4727]: I1210 16:04:46.387587 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4tq5b_fb42bc94-d07f-4121-8591-8f868d089a2a/ovsdb-server-init/0.log" Dec 10 16:04:46 crc kubenswrapper[4727]: I1210 16:04:46.597515 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4tq5b_fb42bc94-d07f-4121-8591-8f868d089a2a/ovs-vswitchd/0.log" Dec 10 16:04:46 crc kubenswrapper[4727]: I1210 16:04:46.603530 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4tq5b_fb42bc94-d07f-4121-8591-8f868d089a2a/ovsdb-server-init/0.log" Dec 10 16:04:46 crc kubenswrapper[4727]: I1210 16:04:46.667403 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4tq5b_fb42bc94-d07f-4121-8591-8f868d089a2a/ovsdb-server/0.log" Dec 10 16:04:46 crc kubenswrapper[4727]: I1210 16:04:46.684427 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_86934207-9ef6-488d-8f95-4ab3ad0c5fc7/nova-metadata-metadata/0.log" Dec 10 16:04:46 crc kubenswrapper[4727]: I1210 16:04:46.845946 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-x2fq6_69a1889a-3ba4-463e-bd9a-4f417ca69280/ovn-controller/0.log" Dec 10 16:04:46 crc kubenswrapper[4727]: I1210 16:04:46.982308 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ec242832-714a-4cb7-9bdc-c88b5336c201/openstack-network-exporter/0.log" Dec 10 16:04:47 crc kubenswrapper[4727]: I1210 16:04:47.106288 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ec242832-714a-4cb7-9bdc-c88b5336c201/ovn-northd/0.log" Dec 10 16:04:47 crc kubenswrapper[4727]: I1210 16:04:47.203443 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3a122a79-dced-4afa-bb4c-4c6cd806770a/openstack-network-exporter/0.log" Dec 10 16:04:47 crc kubenswrapper[4727]: I1210 16:04:47.226875 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3a122a79-dced-4afa-bb4c-4c6cd806770a/ovsdbserver-nb/0.log" Dec 10 16:04:47 crc kubenswrapper[4727]: I1210 16:04:47.375510 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_7b92e7c2-a91e-4b8d-9316-ea7fc4e90188/openstack-network-exporter/0.log" Dec 10 16:04:47 crc kubenswrapper[4727]: I1210 16:04:47.473741 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7b92e7c2-a91e-4b8d-9316-ea7fc4e90188/ovsdbserver-sb/0.log" Dec 10 16:04:47 crc kubenswrapper[4727]: I1210 16:04:47.662192 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f9b858948-mx57t_f0297dbf-60f7-48b2-b36d-3ed437da1944/placement-api/0.log" Dec 10 16:04:47 crc kubenswrapper[4727]: I1210 16:04:47.747887 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f9b858948-mx57t_f0297dbf-60f7-48b2-b36d-3ed437da1944/placement-log/0.log" Dec 10 16:04:47 crc kubenswrapper[4727]: I1210 16:04:47.774123 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d61f7e3d-578d-4429-ad9c-31ba3e8be091/init-config-reloader/0.log" Dec 10 16:04:47 crc kubenswrapper[4727]: I1210 16:04:47.997169 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d61f7e3d-578d-4429-ad9c-31ba3e8be091/init-config-reloader/0.log" Dec 10 16:04:48 crc kubenswrapper[4727]: I1210 16:04:48.043519 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d61f7e3d-578d-4429-ad9c-31ba3e8be091/config-reloader/0.log" Dec 10 16:04:48 crc kubenswrapper[4727]: I1210 16:04:48.090417 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d61f7e3d-578d-4429-ad9c-31ba3e8be091/thanos-sidecar/0.log" Dec 10 16:04:48 crc kubenswrapper[4727]: I1210 16:04:48.097775 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d61f7e3d-578d-4429-ad9c-31ba3e8be091/prometheus/0.log" Dec 10 16:04:48 crc kubenswrapper[4727]: I1210 16:04:48.272268 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6bb6d576-35c9-4ce9-9fa0-6cef4f513739/setup-container/0.log" Dec 10 16:04:49 crc kubenswrapper[4727]: I1210 16:04:49.226557 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6bb6d576-35c9-4ce9-9fa0-6cef4f513739/rabbitmq/0.log" Dec 10 16:04:49 crc kubenswrapper[4727]: I1210 16:04:49.286750 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6bb6d576-35c9-4ce9-9fa0-6cef4f513739/setup-container/0.log" Dec 10 16:04:49 crc kubenswrapper[4727]: I1210 16:04:49.297006 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c/setup-container/0.log" Dec 10 16:04:49 crc kubenswrapper[4727]: I1210 16:04:49.543636 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-x49j7_b2b9c360-f933-4326-8b6f-5c1d869577c9/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:04:49 crc kubenswrapper[4727]: I1210 16:04:49.548835 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c/setup-container/0.log" Dec 10 16:04:49 crc kubenswrapper[4727]: I1210 16:04:49.555000 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8ea795c9-5aba-42f8-bcc4-a285c3fa4a6c/rabbitmq/0.log" Dec 10 16:04:49 crc kubenswrapper[4727]: I1210 16:04:49.733832 4727 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-nrl9k_49639fa2-d7d8-427b-ac47-9221c7fd68c3/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:04:49 crc kubenswrapper[4727]: I1210 16:04:49.976791 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7f4548974c-shwfg_4dd2b5d5-3831-4578-a676-5338dd451099/proxy-server/0.log" Dec 10 16:04:49 crc kubenswrapper[4727]: I1210 16:04:49.992673 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7f4548974c-shwfg_4dd2b5d5-3831-4578-a676-5338dd451099/proxy-httpd/0.log" Dec 10 16:04:50 crc kubenswrapper[4727]: I1210 16:04:50.073994 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-r4x4n_56fa8f94-70a1-47ce-85c5-947e889ba79c/swift-ring-rebalance/0.log" Dec 10 16:04:50 crc kubenswrapper[4727]: I1210 16:04:50.240021 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2b2c88bb-9134-46aa-8595-4762fca3fb57/account-auditor/0.log" Dec 10 16:04:50 crc kubenswrapper[4727]: I1210 16:04:50.283008 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2b2c88bb-9134-46aa-8595-4762fca3fb57/account-reaper/0.log" Dec 10 16:04:50 crc kubenswrapper[4727]: I1210 16:04:50.364059 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2b2c88bb-9134-46aa-8595-4762fca3fb57/account-replicator/0.log" Dec 10 16:04:50 crc kubenswrapper[4727]: I1210 16:04:50.916810 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2b2c88bb-9134-46aa-8595-4762fca3fb57/account-server/0.log" Dec 10 16:04:50 crc kubenswrapper[4727]: I1210 16:04:50.927266 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2b2c88bb-9134-46aa-8595-4762fca3fb57/container-auditor/0.log" Dec 10 16:04:50 crc kubenswrapper[4727]: I1210 16:04:50.927894 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2b2c88bb-9134-46aa-8595-4762fca3fb57/container-server/0.log" Dec 10 16:04:51 crc kubenswrapper[4727]: I1210 16:04:51.031383 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2b2c88bb-9134-46aa-8595-4762fca3fb57/container-replicator/0.log" Dec 10 16:04:51 crc kubenswrapper[4727]: I1210 16:04:51.094629 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2b2c88bb-9134-46aa-8595-4762fca3fb57/container-updater/0.log" Dec 10 16:04:51 crc kubenswrapper[4727]: I1210 16:04:51.165500 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2b2c88bb-9134-46aa-8595-4762fca3fb57/object-expirer/0.log" Dec 10 16:04:51 crc kubenswrapper[4727]: I1210 16:04:51.180549 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2b2c88bb-9134-46aa-8595-4762fca3fb57/object-auditor/0.log" Dec 10 16:04:51 crc kubenswrapper[4727]: I1210 16:04:51.297925 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2b2c88bb-9134-46aa-8595-4762fca3fb57/object-replicator/0.log" Dec 10 16:04:51 crc kubenswrapper[4727]: I1210 16:04:51.308788 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2b2c88bb-9134-46aa-8595-4762fca3fb57/object-server/0.log" Dec 10 16:04:51 crc kubenswrapper[4727]: I1210 16:04:51.461691 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_2b2c88bb-9134-46aa-8595-4762fca3fb57/rsync/0.log" Dec 10 16:04:51 crc kubenswrapper[4727]: I1210 16:04:51.464567 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2b2c88bb-9134-46aa-8595-4762fca3fb57/object-updater/0.log" Dec 10 16:04:51 crc kubenswrapper[4727]: I1210 16:04:51.559833 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2b2c88bb-9134-46aa-8595-4762fca3fb57/swift-recon-cron/0.log" Dec 10 16:04:52 crc kubenswrapper[4727]: E1210 16:04:52.564743 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:04:52 crc kubenswrapper[4727]: E1210 16:04:52.565273 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:04:56 crc kubenswrapper[4727]: I1210 16:04:56.573292 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:04:56 crc kubenswrapper[4727]: E1210 16:04:56.574074 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:04:57 crc kubenswrapper[4727]: I1210 16:04:57.526999 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_6e710c7e-8c31-487b-ade5-a403f619e489/memcached/0.log" Dec 10 16:05:05 crc kubenswrapper[4727]: E1210 16:05:05.565540 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:05:06 crc kubenswrapper[4727]: E1210 16:05:06.570896 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:05:08 crc kubenswrapper[4727]: I1210 16:05:08.563297 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:05:08 crc kubenswrapper[4727]: E1210 16:05:08.563936 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:05:17 crc kubenswrapper[4727]: E1210 16:05:17.566499 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:05:19 crc kubenswrapper[4727]: I1210 16:05:19.263893 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-dwczj_14d03018-c372-4ede-bb5d-47efd53f4d51/kube-rbac-proxy/0.log" Dec 10 16:05:19 crc kubenswrapper[4727]: I1210 16:05:19.320851 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-dwczj_14d03018-c372-4ede-bb5d-47efd53f4d51/manager/0.log" Dec 10 16:05:19 crc kubenswrapper[4727]: I1210 16:05:19.477669 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-tvzpk_42310864-a5cb-44fd-b5eb-c5bf1f9c8ce9/manager/0.log" Dec 10 16:05:19 crc kubenswrapper[4727]: I1210 16:05:19.505959 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-tvzpk_42310864-a5cb-44fd-b5eb-c5bf1f9c8ce9/kube-rbac-proxy/0.log" Dec 10 16:05:19 crc kubenswrapper[4727]: I1210 16:05:19.675514 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-drdt4_472840c5-9b95-4303-911c-7b27232236ea/kube-rbac-proxy/0.log" Dec 10 16:05:19 crc kubenswrapper[4727]: I1210 16:05:19.677417 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-drdt4_472840c5-9b95-4303-911c-7b27232236ea/manager/0.log" Dec 10 16:05:19 crc kubenswrapper[4727]: I1210 16:05:19.798447 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz_7037a391-6332-4903-9b2e-7910e334ae5d/util/0.log" Dec 10 16:05:19 crc kubenswrapper[4727]: I1210 16:05:19.948583 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz_7037a391-6332-4903-9b2e-7910e334ae5d/pull/0.log" Dec 10 16:05:19 crc kubenswrapper[4727]: I1210 16:05:19.958381 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz_7037a391-6332-4903-9b2e-7910e334ae5d/util/0.log" Dec 10 16:05:19 crc kubenswrapper[4727]: I1210 16:05:19.998761 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz_7037a391-6332-4903-9b2e-7910e334ae5d/pull/0.log" Dec 10 16:05:20 crc kubenswrapper[4727]: I1210 16:05:20.181015 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz_7037a391-6332-4903-9b2e-7910e334ae5d/util/0.log" Dec 10 16:05:20 crc kubenswrapper[4727]: I1210 16:05:20.187094 4727 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz_7037a391-6332-4903-9b2e-7910e334ae5d/pull/0.log" Dec 10 16:05:20 crc kubenswrapper[4727]: I1210 16:05:20.187230 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eae46674df07961463807dff100e34042f708e1754292886c53dfb22f5nj9jz_7037a391-6332-4903-9b2e-7910e334ae5d/extract/0.log" Dec 10 16:05:20 crc kubenswrapper[4727]: I1210 16:05:20.398498 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-tmlwp_862ae668-98ff-4531-9eca-6309953e1333/kube-rbac-proxy/0.log" Dec 10 16:05:20 crc kubenswrapper[4727]: I1210 16:05:20.440471 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-tmlwp_862ae668-98ff-4531-9eca-6309953e1333/manager/0.log" Dec 10 16:05:20 crc kubenswrapper[4727]: I1210 16:05:20.480246 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-q9gqt_4728dafd-bcaa-4b10-b2f5-9884b3d3a2b5/kube-rbac-proxy/0.log" Dec 10 16:05:20 crc kubenswrapper[4727]: E1210 16:05:20.565312 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:05:21 crc kubenswrapper[4727]: I1210 16:05:21.122989 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-7vs7p_ac673936-2054-4418-bb79-5aad0e79b264/manager/0.log" Dec 10 16:05:21 crc kubenswrapper[4727]: I1210 16:05:21.137148 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-7vs7p_ac673936-2054-4418-bb79-5aad0e79b264/kube-rbac-proxy/0.log" Dec 10 16:05:21 crc kubenswrapper[4727]: I1210 16:05:21.152559 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-q9gqt_4728dafd-bcaa-4b10-b2f5-9884b3d3a2b5/manager/0.log" Dec 10 16:05:21 crc kubenswrapper[4727]: I1210 16:05:21.326612 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-2jdhx_ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e/kube-rbac-proxy/0.log" Dec 10 16:05:21 crc kubenswrapper[4727]: I1210 16:05:21.563432 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:05:21 crc kubenswrapper[4727]: E1210 16:05:21.563714 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:05:21 crc kubenswrapper[4727]: I1210 16:05:21.581517 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-2jdhx_ad8d8c3a-b26b-47d4-a4d1-1fd318a5bf4e/manager/0.log" 
Dec 10 16:05:21 crc kubenswrapper[4727]: I1210 16:05:21.598900 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-nbjpv_be805dee-d0fa-4358-b461-bfd98c87bcaa/manager/0.log" Dec 10 16:05:21 crc kubenswrapper[4727]: I1210 16:05:21.634186 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-nbjpv_be805dee-d0fa-4358-b461-bfd98c87bcaa/kube-rbac-proxy/0.log" Dec 10 16:05:21 crc kubenswrapper[4727]: I1210 16:05:21.781131 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-szdjh_db7bd96a-8494-4041-9277-93705f23849d/kube-rbac-proxy/0.log" Dec 10 16:05:21 crc kubenswrapper[4727]: I1210 16:05:21.835187 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-5rmxn_da47343a-d98d-4f70-bc94-ae74257914e2/kube-rbac-proxy/0.log" Dec 10 16:05:21 crc kubenswrapper[4727]: I1210 16:05:21.846534 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-szdjh_db7bd96a-8494-4041-9277-93705f23849d/manager/0.log" Dec 10 16:05:22 crc kubenswrapper[4727]: I1210 16:05:22.022052 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-5rmxn_da47343a-d98d-4f70-bc94-ae74257914e2/manager/0.log" Dec 10 16:05:22 crc kubenswrapper[4727]: I1210 16:05:22.085589 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-5s987_713c2d47-7281-46d4-bbcd-16fba5161b5a/manager/0.log" Dec 10 16:05:22 crc kubenswrapper[4727]: I1210 16:05:22.113267 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-5s987_713c2d47-7281-46d4-bbcd-16fba5161b5a/kube-rbac-proxy/0.log" Dec 10 16:05:22 crc kubenswrapper[4727]: I1210 16:05:22.299516 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-hqsbc_34a91f52-98f9-4ada-b0d1-54bba42c1035/kube-rbac-proxy/0.log" Dec 10 16:05:22 crc kubenswrapper[4727]: I1210 16:05:22.342078 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-hqsbc_34a91f52-98f9-4ada-b0d1-54bba42c1035/manager/0.log" Dec 10 16:05:22 crc kubenswrapper[4727]: I1210 16:05:22.451818 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-9zpx4_9cf31fa3-bbae-4fcd-9d8a-c11a6b291642/kube-rbac-proxy/0.log" Dec 10 16:05:22 crc kubenswrapper[4727]: I1210 16:05:22.551220 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-9zpx4_9cf31fa3-bbae-4fcd-9d8a-c11a6b291642/manager/0.log" Dec 10 16:05:23 crc kubenswrapper[4727]: I1210 16:05:23.051661 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-m2c89_365ce9e6-678d-4036-9971-3c82e553fa22/kube-rbac-proxy/0.log" Dec 10 16:05:23 crc kubenswrapper[4727]: I1210 16:05:23.067443 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-m2c89_365ce9e6-678d-4036-9971-3c82e553fa22/manager/0.log" Dec 10 16:05:23 crc kubenswrapper[4727]: I1210 16:05:23.261885 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fh8tfl_492a4765-2161-48a4-a37b-2a11c919ebcf/kube-rbac-proxy/0.log" Dec 10 16:05:23 crc kubenswrapper[4727]: I1210 16:05:23.263026 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fh8tfl_492a4765-2161-48a4-a37b-2a11c919ebcf/manager/0.log" Dec 10 16:05:23 crc kubenswrapper[4727]: I1210 16:05:23.493475 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ss25w_64b105d9-469b-4db7-9358-60e9ed040aee/registry-server/0.log" Dec 10 16:05:23 crc kubenswrapper[4727]: I1210 16:05:23.663057 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-54566bf895-5d2db_b6f32208-e433-43a8-9cc3-1a0b642db859/operator/0.log" Dec 10 16:05:23 crc kubenswrapper[4727]: I1210 16:05:23.713708 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-jx2r4_f2964ecf-16a8-4906-a0ee-b2823ec9e9fd/kube-rbac-proxy/0.log" Dec 10 16:05:23 crc kubenswrapper[4727]: I1210 16:05:23.870042 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-jx2r4_f2964ecf-16a8-4906-a0ee-b2823ec9e9fd/manager/0.log" Dec 10 16:05:23 crc kubenswrapper[4727]: I1210 16:05:23.940945 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-b2njb_a1fce3d2-8cd4-4c57-95a4-57f04ff403a7/kube-rbac-proxy/0.log" Dec 10 16:05:23 crc kubenswrapper[4727]: I1210 16:05:23.975201 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-b2njb_a1fce3d2-8cd4-4c57-95a4-57f04ff403a7/manager/0.log" Dec 10 16:05:24 crc kubenswrapper[4727]: I1210 16:05:24.223784 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2mccl_26bb64b8-f74e-41f5-9bca-ed451468d0ab/operator/0.log" Dec 10 16:05:24 crc kubenswrapper[4727]: I1210 16:05:24.260312 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-fzvpw_65cd268a-1c59-4115-8732-085b07c41edf/kube-rbac-proxy/0.log" Dec 10 16:05:24 crc kubenswrapper[4727]: I1210 16:05:24.435466 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5f9b77867b-r8l6w_1149bbe0-dcd9-430b-b37a-fb145387df5f/manager/0.log" Dec 10 16:05:24 crc kubenswrapper[4727]: I1210 16:05:24.459009 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5bbb8fffcc-sml6p_4bb1a719-bf76-4753-9bda-e5b2a71b2f96/kube-rbac-proxy/0.log" Dec 10 16:05:24 crc kubenswrapper[4727]: I1210 16:05:24.510979 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-fzvpw_65cd268a-1c59-4115-8732-085b07c41edf/manager/0.log" Dec 10 16:05:24 crc kubenswrapper[4727]: I1210 16:05:24.718087 4727 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-dbvwg_adc7591e-00e9-4ed5-9d6b-729a283cf25d/kube-rbac-proxy/0.log" Dec 10 16:05:24 crc kubenswrapper[4727]: I1210 16:05:24.746062 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-dbvwg_adc7591e-00e9-4ed5-9d6b-729a283cf25d/manager/0.log" Dec 10 16:05:24 crc kubenswrapper[4727]: I1210 16:05:24.882493 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-qllqx_eae9110d-9b14-4360-9e66-60bf84efae12/kube-rbac-proxy/0.log" Dec 10 16:05:25 crc kubenswrapper[4727]: I1210 16:05:25.051782 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-qllqx_eae9110d-9b14-4360-9e66-60bf84efae12/manager/0.log" Dec 10 16:05:25 crc kubenswrapper[4727]: I1210 16:05:25.166756 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5bbb8fffcc-sml6p_4bb1a719-bf76-4753-9bda-e5b2a71b2f96/manager/0.log" Dec 10 16:05:30 crc kubenswrapper[4727]: E1210 16:05:30.565604 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:05:31 crc kubenswrapper[4727]: E1210 16:05:31.566671 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:05:33 crc kubenswrapper[4727]: I1210 16:05:33.563297 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:05:33 crc kubenswrapper[4727]: E1210 16:05:33.563970 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:05:41 crc kubenswrapper[4727]: E1210 16:05:41.565842 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:05:43 crc kubenswrapper[4727]: E1210 16:05:43.566352 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:05:45 crc kubenswrapper[4727]: I1210 
16:05:45.612269 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-s4p8m_6c7b70a8-7b74-4562-bc8b-bd5be42a8222/control-plane-machine-set-operator/0.log" Dec 10 16:05:45 crc kubenswrapper[4727]: I1210 16:05:45.839207 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sjrxq_ec2ea8fb-1885-4b49-8bd2-ee4a63586ade/kube-rbac-proxy/0.log" Dec 10 16:05:45 crc kubenswrapper[4727]: I1210 16:05:45.858308 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sjrxq_ec2ea8fb-1885-4b49-8bd2-ee4a63586ade/machine-api-operator/0.log" Dec 10 16:05:48 crc kubenswrapper[4727]: I1210 16:05:48.563650 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:05:49 crc kubenswrapper[4727]: I1210 16:05:49.566881 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"2ceecd4ab9f335ebf00db0ddc369f5513889fdc44b52e84f058ca732803b5712"} Dec 10 16:05:52 crc kubenswrapper[4727]: E1210 16:05:52.567076 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:05:57 crc kubenswrapper[4727]: E1210 16:05:57.567369 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:05:59 crc kubenswrapper[4727]: I1210 16:05:59.585176 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-r6b6k_f5ae0272-0a54-4f54-99c5-8aa79709f8b7/cert-manager-controller/0.log" Dec 10 16:05:59 crc kubenswrapper[4727]: I1210 16:05:59.786653 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-gwq6p_03818f03-7545-4ec1-9ad4-87ee47095668/cert-manager-cainjector/0.log" Dec 10 16:05:59 crc kubenswrapper[4727]: I1210 16:05:59.839204 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-lqpdt_97e964eb-2a4f-4bd4-8564-ef21f729095b/cert-manager-webhook/0.log" Dec 10 16:06:07 crc kubenswrapper[4727]: E1210 16:06:07.565446 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:06:08 crc kubenswrapper[4727]: E1210 16:06:08.565482 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:06:12 crc kubenswrapper[4727]: I1210 16:06:12.513714 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-kflbl_fa45914b-29fb-49a1-8a3b-f29d4f0dedc2/nmstate-console-plugin/0.log" Dec 10 16:06:12 crc kubenswrapper[4727]: I1210 16:06:12.715832 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sbrdr_cd926915-18e8-430d-8329-a56205b43546/nmstate-handler/0.log" Dec 10 16:06:12 crc kubenswrapper[4727]: I1210 16:06:12.734661 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-6ttgs_40f66a34-f6b1-472c-ae2b-494c4ccb8735/kube-rbac-proxy/0.log" Dec 10 16:06:12 crc kubenswrapper[4727]: I1210 16:06:12.775336 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-6ttgs_40f66a34-f6b1-472c-ae2b-494c4ccb8735/nmstate-metrics/0.log" Dec 10 16:06:12 crc kubenswrapper[4727]: I1210 16:06:12.925954 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-9g8g6_de820475-3267-47cf-8db9-e6294484117c/nmstate-operator/0.log" Dec 10 16:06:12 crc kubenswrapper[4727]: I1210 16:06:12.978737 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-x999q_bee6b7dc-2e3a-4ae1-bb4b-27a411edee96/nmstate-webhook/0.log" Dec 10 16:06:19 crc kubenswrapper[4727]: E1210 16:06:19.565699 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:06:23 crc kubenswrapper[4727]: E1210 16:06:23.565609 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 16:06:25.691270 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2wlkt"] Dec 10 16:06:25 crc kubenswrapper[4727]: E1210 16:06:25.692049 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e1ddd4-f48d-45c7-a565-9d1dad1b987a" containerName="extract-content" Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 16:06:25.692062 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e1ddd4-f48d-45c7-a565-9d1dad1b987a" containerName="extract-content" Dec 10 16:06:25 crc kubenswrapper[4727]: E1210 16:06:25.692089 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e1ddd4-f48d-45c7-a565-9d1dad1b987a" containerName="extract-utilities" Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 16:06:25.692095 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e1ddd4-f48d-45c7-a565-9d1dad1b987a" containerName="extract-utilities" Dec 10 16:06:25 crc kubenswrapper[4727]: E1210 16:06:25.692124 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e1ddd4-f48d-45c7-a565-9d1dad1b987a" containerName="registry-server" Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 
16:06:25.692130 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e1ddd4-f48d-45c7-a565-9d1dad1b987a" containerName="registry-server" Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 16:06:25.692354 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9e1ddd4-f48d-45c7-a565-9d1dad1b987a" containerName="registry-server" Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 16:06:25.694279 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 16:06:25.714065 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2wlkt"] Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 16:06:25.760730 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c99xr\" (UniqueName: \"kubernetes.io/projected/cc408785-8e41-4a34-8ea8-0533bb01626d-kube-api-access-c99xr\") pod \"redhat-operators-2wlkt\" (UID: \"cc408785-8e41-4a34-8ea8-0533bb01626d\") " pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 16:06:25.760831 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc408785-8e41-4a34-8ea8-0533bb01626d-utilities\") pod \"redhat-operators-2wlkt\" (UID: \"cc408785-8e41-4a34-8ea8-0533bb01626d\") " pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 16:06:25.760899 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc408785-8e41-4a34-8ea8-0533bb01626d-catalog-content\") pod \"redhat-operators-2wlkt\" (UID: \"cc408785-8e41-4a34-8ea8-0533bb01626d\") " pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 16:06:25.862530 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c99xr\" (UniqueName: \"kubernetes.io/projected/cc408785-8e41-4a34-8ea8-0533bb01626d-kube-api-access-c99xr\") pod \"redhat-operators-2wlkt\" (UID: \"cc408785-8e41-4a34-8ea8-0533bb01626d\") " pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 16:06:25.862645 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc408785-8e41-4a34-8ea8-0533bb01626d-utilities\") pod \"redhat-operators-2wlkt\" (UID: \"cc408785-8e41-4a34-8ea8-0533bb01626d\") " pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 16:06:25.862692 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc408785-8e41-4a34-8ea8-0533bb01626d-catalog-content\") pod \"redhat-operators-2wlkt\" (UID: \"cc408785-8e41-4a34-8ea8-0533bb01626d\") " pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 16:06:25.863202 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc408785-8e41-4a34-8ea8-0533bb01626d-utilities\") pod \"redhat-operators-2wlkt\" (UID: \"cc408785-8e41-4a34-8ea8-0533bb01626d\") " pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 16:06:25.863249 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc408785-8e41-4a34-8ea8-0533bb01626d-catalog-content\") pod \"redhat-operators-2wlkt\" (UID: \"cc408785-8e41-4a34-8ea8-0533bb01626d\") " pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:25 crc kubenswrapper[4727]: I1210 16:06:25.887518 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c99xr\" (UniqueName: \"kubernetes.io/projected/cc408785-8e41-4a34-8ea8-0533bb01626d-kube-api-access-c99xr\") pod \"redhat-operators-2wlkt\" (UID: \"cc408785-8e41-4a34-8ea8-0533bb01626d\") " pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:26 crc kubenswrapper[4727]: I1210 16:06:26.025399 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:26 crc kubenswrapper[4727]: W1210 16:06:26.902698 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc408785_8e41_4a34_8ea8_0533bb01626d.slice/crio-e135e9b08ddde2155cdce19f251483f29979dcfc28c54fa4e821d52377ffad1b WatchSource:0}: Error finding container e135e9b08ddde2155cdce19f251483f29979dcfc28c54fa4e821d52377ffad1b: Status 404 returned error can't find the container with id e135e9b08ddde2155cdce19f251483f29979dcfc28c54fa4e821d52377ffad1b Dec 10 16:06:26 crc kubenswrapper[4727]: I1210 16:06:26.903764 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2wlkt"] Dec 10 16:06:26 crc kubenswrapper[4727]: I1210 16:06:26.968665 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wlkt" event={"ID":"cc408785-8e41-4a34-8ea8-0533bb01626d","Type":"ContainerStarted","Data":"e135e9b08ddde2155cdce19f251483f29979dcfc28c54fa4e821d52377ffad1b"} Dec 10 16:06:27 crc kubenswrapper[4727]: I1210 16:06:27.037782 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-77d49cfc99-tqvhd_df5cd708-bc5a-4188-84d4-10f25154053d/kube-rbac-proxy/0.log" Dec 10 16:06:27 crc kubenswrapper[4727]: I1210 16:06:27.134710 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-77d49cfc99-tqvhd_df5cd708-bc5a-4188-84d4-10f25154053d/manager/0.log" Dec 10 16:06:27 crc kubenswrapper[4727]: I1210 16:06:27.979458 4727 generic.go:334] "Generic (PLEG): container finished" podID="cc408785-8e41-4a34-8ea8-0533bb01626d" containerID="be3dee2cf6ea1ec64d93d97918d909cf3ef97aff81c5609a104440419abfd39c" exitCode=0 Dec 10 16:06:27 crc kubenswrapper[4727]: I1210 16:06:27.979569 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wlkt" event={"ID":"cc408785-8e41-4a34-8ea8-0533bb01626d","Type":"ContainerDied","Data":"be3dee2cf6ea1ec64d93d97918d909cf3ef97aff81c5609a104440419abfd39c"} Dec 10 16:06:30 crc kubenswrapper[4727]: I1210 16:06:30.001655 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wlkt" event={"ID":"cc408785-8e41-4a34-8ea8-0533bb01626d","Type":"ContainerStarted","Data":"5e46e143c125c5827be15ace76af3f11043dcbcf477bb0751ee003e7771e5dd7"} Dec 10 16:06:34 crc kubenswrapper[4727]: E1210 16:06:34.565763 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:06:37 crc kubenswrapper[4727]: I1210 16:06:37.088387 4727 generic.go:334] "Generic (PLEG): container finished" podID="cc408785-8e41-4a34-8ea8-0533bb01626d" containerID="5e46e143c125c5827be15ace76af3f11043dcbcf477bb0751ee003e7771e5dd7" exitCode=0 Dec 10 16:06:37 crc kubenswrapper[4727]: I1210 16:06:37.088428 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wlkt" event={"ID":"cc408785-8e41-4a34-8ea8-0533bb01626d","Type":"ContainerDied","Data":"5e46e143c125c5827be15ace76af3f11043dcbcf477bb0751ee003e7771e5dd7"} Dec 10 16:06:38 crc kubenswrapper[4727]: E1210 16:06:38.565177 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:06:40 crc kubenswrapper[4727]: I1210 16:06:40.116525 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wlkt" event={"ID":"cc408785-8e41-4a34-8ea8-0533bb01626d","Type":"ContainerStarted","Data":"f60194e5fb983d4404fe0fe060a5ea325338cd923c967b395a1596ce44825174"} Dec 10 16:06:40 crc kubenswrapper[4727]: I1210 16:06:40.146423 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2wlkt" podStartSLOduration=4.21018026 podStartE2EDuration="15.146397325s" podCreationTimestamp="2025-12-10 16:06:25 +0000 UTC" firstStartedPulling="2025-12-10 16:06:27.981244192 +0000 UTC m=+5692.176018734" lastFinishedPulling="2025-12-10 16:06:38.917461257 +0000 UTC m=+5703.112235799" observedRunningTime="2025-12-10 16:06:40.133779385 +0000 UTC m=+5704.328553927" watchObservedRunningTime="2025-12-10 16:06:40.146397325 +0000 UTC m=+5704.341171867" Dec 10 16:06:43 crc kubenswrapper[4727]: I1210 16:06:43.860586 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-dmgwc_34bca420-ab68-464d-b96e-631fc55e2b41/kube-rbac-proxy/0.log" Dec 10 16:06:43 crc kubenswrapper[4727]: I1210 16:06:43.959008 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-dmgwc_34bca420-ab68-464d-b96e-631fc55e2b41/controller/0.log" Dec 10 16:06:44 crc kubenswrapper[4727]: I1210 16:06:44.055412 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-7b7cs_bcd252b1-5939-47ba-99a4-300d504b615f/frr-k8s-webhook-server/0.log" Dec 10 16:06:44 crc kubenswrapper[4727]: I1210 16:06:44.209113 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/cp-frr-files/0.log" Dec 10 16:06:44 crc kubenswrapper[4727]: I1210 16:06:44.374882 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/cp-reloader/0.log" Dec 10 16:06:44 crc kubenswrapper[4727]: I1210 16:06:44.404187 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/cp-frr-files/0.log" Dec 10 16:06:44 crc kubenswrapper[4727]: I1210 16:06:44.444776 4727 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/cp-metrics/0.log" Dec 10 16:06:44 crc kubenswrapper[4727]: I1210 16:06:44.502368 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/cp-reloader/0.log" Dec 10 16:06:44 crc kubenswrapper[4727]: I1210 16:06:44.734257 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/cp-metrics/0.log" Dec 10 16:06:44 crc kubenswrapper[4727]: I1210 16:06:44.745922 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/cp-frr-files/0.log" Dec 10 16:06:44 crc kubenswrapper[4727]: I1210 16:06:44.756853 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/cp-metrics/0.log" Dec 10 16:06:44 crc kubenswrapper[4727]: I1210 16:06:44.763093 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/cp-reloader/0.log" Dec 10 16:06:44 crc kubenswrapper[4727]: I1210 16:06:44.974359 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/cp-reloader/0.log" Dec 10 16:06:44 crc kubenswrapper[4727]: I1210 16:06:44.996802 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/cp-frr-files/0.log" Dec 10 16:06:45 crc kubenswrapper[4727]: I1210 16:06:45.031115 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/cp-metrics/0.log" Dec 10 16:06:45 crc kubenswrapper[4727]: I1210 16:06:45.150539 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/controller/0.log" Dec 10 16:06:45 crc kubenswrapper[4727]: I1210 16:06:45.237616 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/frr-metrics/0.log" Dec 10 16:06:45 crc kubenswrapper[4727]: I1210 16:06:45.306648 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/kube-rbac-proxy/0.log" Dec 10 16:06:45 crc kubenswrapper[4727]: I1210 16:06:45.752419 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/reloader/0.log" Dec 10 16:06:45 crc kubenswrapper[4727]: I1210 16:06:45.752963 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/kube-rbac-proxy-frr/0.log" Dec 10 16:06:46 crc kubenswrapper[4727]: I1210 16:06:46.025921 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:46 crc kubenswrapper[4727]: I1210 16:06:46.026079 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:46 crc kubenswrapper[4727]: I1210 16:06:46.122342 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7f4c459996-xq7b2_294cf2e6-2528-4fdd-be76-426382e72b19/manager/0.log" Dec 10 16:06:46 crc 
kubenswrapper[4727]: I1210 16:06:46.241800 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-56d895889c-x5d82_883f25c7-a8aa-47bb-9726-f5fdc14e4952/webhook-server/0.log" Dec 10 16:06:46 crc kubenswrapper[4727]: I1210 16:06:46.389356 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-26pqg_817c95ce-865f-41a5-a7bf-e88c222e8a4a/kube-rbac-proxy/0.log" Dec 10 16:06:46 crc kubenswrapper[4727]: E1210 16:06:46.593254 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:06:46 crc kubenswrapper[4727]: I1210 16:06:46.970558 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xcqhz_a4c9e124-30b6-42a8-9b85-bef3d1836f12/frr/0.log" Dec 10 16:06:47 crc kubenswrapper[4727]: I1210 16:06:47.102311 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2wlkt" podUID="cc408785-8e41-4a34-8ea8-0533bb01626d" containerName="registry-server" probeResult="failure" output=< Dec 10 16:06:47 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Dec 10 16:06:47 crc kubenswrapper[4727]: > Dec 10 16:06:47 crc kubenswrapper[4727]: I1210 16:06:47.208092 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-26pqg_817c95ce-865f-41a5-a7bf-e88c222e8a4a/speaker/0.log" Dec 10 16:06:49 crc kubenswrapper[4727]: E1210 16:06:49.568104 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:06:56 crc kubenswrapper[4727]: I1210 16:06:56.072016 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:56 crc kubenswrapper[4727]: I1210 16:06:56.123539 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:56 crc kubenswrapper[4727]: I1210 16:06:56.892326 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2wlkt"] Dec 10 16:06:57 crc kubenswrapper[4727]: I1210 16:06:57.330980 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2wlkt" podUID="cc408785-8e41-4a34-8ea8-0533bb01626d" containerName="registry-server" containerID="cri-o://f60194e5fb983d4404fe0fe060a5ea325338cd923c967b395a1596ce44825174" gracePeriod=2 Dec 10 16:06:57 crc kubenswrapper[4727]: I1210 16:06:57.880305 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:57 crc kubenswrapper[4727]: I1210 16:06:57.967356 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc408785-8e41-4a34-8ea8-0533bb01626d-utilities\") pod \"cc408785-8e41-4a34-8ea8-0533bb01626d\" (UID: \"cc408785-8e41-4a34-8ea8-0533bb01626d\") " Dec 10 16:06:57 crc kubenswrapper[4727]: I1210 16:06:57.967511 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc408785-8e41-4a34-8ea8-0533bb01626d-catalog-content\") pod \"cc408785-8e41-4a34-8ea8-0533bb01626d\" (UID: \"cc408785-8e41-4a34-8ea8-0533bb01626d\") " Dec 10 16:06:57 crc kubenswrapper[4727]: I1210 16:06:57.967607 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c99xr\" (UniqueName: \"kubernetes.io/projected/cc408785-8e41-4a34-8ea8-0533bb01626d-kube-api-access-c99xr\") pod \"cc408785-8e41-4a34-8ea8-0533bb01626d\" (UID: \"cc408785-8e41-4a34-8ea8-0533bb01626d\") " Dec 10 16:06:57 crc kubenswrapper[4727]: I1210 16:06:57.970353 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc408785-8e41-4a34-8ea8-0533bb01626d-utilities" (OuterVolumeSpecName: "utilities") pod "cc408785-8e41-4a34-8ea8-0533bb01626d" (UID: "cc408785-8e41-4a34-8ea8-0533bb01626d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:06:57 crc kubenswrapper[4727]: I1210 16:06:57.982450 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc408785-8e41-4a34-8ea8-0533bb01626d-kube-api-access-c99xr" (OuterVolumeSpecName: "kube-api-access-c99xr") pod "cc408785-8e41-4a34-8ea8-0533bb01626d" (UID: "cc408785-8e41-4a34-8ea8-0533bb01626d"). InnerVolumeSpecName "kube-api-access-c99xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.070400 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c99xr\" (UniqueName: \"kubernetes.io/projected/cc408785-8e41-4a34-8ea8-0533bb01626d-kube-api-access-c99xr\") on node \"crc\" DevicePath \"\"" Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.070668 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc408785-8e41-4a34-8ea8-0533bb01626d-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.102929 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc408785-8e41-4a34-8ea8-0533bb01626d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc408785-8e41-4a34-8ea8-0533bb01626d" (UID: "cc408785-8e41-4a34-8ea8-0533bb01626d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.173229 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc408785-8e41-4a34-8ea8-0533bb01626d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.342114 4727 generic.go:334] "Generic (PLEG): container finished" podID="cc408785-8e41-4a34-8ea8-0533bb01626d" containerID="f60194e5fb983d4404fe0fe060a5ea325338cd923c967b395a1596ce44825174" exitCode=0 Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.342173 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wlkt" event={"ID":"cc408785-8e41-4a34-8ea8-0533bb01626d","Type":"ContainerDied","Data":"f60194e5fb983d4404fe0fe060a5ea325338cd923c967b395a1596ce44825174"} Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.342194 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2wlkt" Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.342222 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wlkt" event={"ID":"cc408785-8e41-4a34-8ea8-0533bb01626d","Type":"ContainerDied","Data":"e135e9b08ddde2155cdce19f251483f29979dcfc28c54fa4e821d52377ffad1b"} Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.342251 4727 scope.go:117] "RemoveContainer" containerID="f60194e5fb983d4404fe0fe060a5ea325338cd923c967b395a1596ce44825174" Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.380977 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2wlkt"] Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.389365 4727 scope.go:117] "RemoveContainer" containerID="5e46e143c125c5827be15ace76af3f11043dcbcf477bb0751ee003e7771e5dd7" Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.398336 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2wlkt"] Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.427625 4727 scope.go:117] "RemoveContainer" containerID="be3dee2cf6ea1ec64d93d97918d909cf3ef97aff81c5609a104440419abfd39c" Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.482999 4727 scope.go:117] "RemoveContainer" containerID="f60194e5fb983d4404fe0fe060a5ea325338cd923c967b395a1596ce44825174" Dec 10 16:06:58 crc kubenswrapper[4727]: E1210 16:06:58.483598 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f60194e5fb983d4404fe0fe060a5ea325338cd923c967b395a1596ce44825174\": container with ID starting with f60194e5fb983d4404fe0fe060a5ea325338cd923c967b395a1596ce44825174 not found: ID does not exist" containerID="f60194e5fb983d4404fe0fe060a5ea325338cd923c967b395a1596ce44825174" Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.483633 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60194e5fb983d4404fe0fe060a5ea325338cd923c967b395a1596ce44825174"} err="failed to get container status \"f60194e5fb983d4404fe0fe060a5ea325338cd923c967b395a1596ce44825174\": rpc error: code = NotFound desc = could not find container \"f60194e5fb983d4404fe0fe060a5ea325338cd923c967b395a1596ce44825174\": container with ID starting with f60194e5fb983d4404fe0fe060a5ea325338cd923c967b395a1596ce44825174 not found: ID does not exist" Dec 10 16:06:58 crc 
kubenswrapper[4727]: I1210 16:06:58.483656 4727 scope.go:117] "RemoveContainer" containerID="5e46e143c125c5827be15ace76af3f11043dcbcf477bb0751ee003e7771e5dd7" Dec 10 16:06:58 crc kubenswrapper[4727]: E1210 16:06:58.484209 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e46e143c125c5827be15ace76af3f11043dcbcf477bb0751ee003e7771e5dd7\": container with ID starting with 5e46e143c125c5827be15ace76af3f11043dcbcf477bb0751ee003e7771e5dd7 not found: ID does not exist" containerID="5e46e143c125c5827be15ace76af3f11043dcbcf477bb0751ee003e7771e5dd7" Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.484260 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e46e143c125c5827be15ace76af3f11043dcbcf477bb0751ee003e7771e5dd7"} err="failed to get container status \"5e46e143c125c5827be15ace76af3f11043dcbcf477bb0751ee003e7771e5dd7\": rpc error: code = NotFound desc = could not find container \"5e46e143c125c5827be15ace76af3f11043dcbcf477bb0751ee003e7771e5dd7\": container with ID starting with 5e46e143c125c5827be15ace76af3f11043dcbcf477bb0751ee003e7771e5dd7 not found: ID does not exist" Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.484294 4727 scope.go:117] "RemoveContainer" containerID="be3dee2cf6ea1ec64d93d97918d909cf3ef97aff81c5609a104440419abfd39c" Dec 10 16:06:58 crc kubenswrapper[4727]: E1210 16:06:58.484580 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be3dee2cf6ea1ec64d93d97918d909cf3ef97aff81c5609a104440419abfd39c\": container with ID starting with be3dee2cf6ea1ec64d93d97918d909cf3ef97aff81c5609a104440419abfd39c not found: ID does not exist" containerID="be3dee2cf6ea1ec64d93d97918d909cf3ef97aff81c5609a104440419abfd39c" Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.484608 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3dee2cf6ea1ec64d93d97918d909cf3ef97aff81c5609a104440419abfd39c"} err="failed to get container status \"be3dee2cf6ea1ec64d93d97918d909cf3ef97aff81c5609a104440419abfd39c\": rpc error: code = NotFound desc = could not find container \"be3dee2cf6ea1ec64d93d97918d909cf3ef97aff81c5609a104440419abfd39c\": container with ID starting with be3dee2cf6ea1ec64d93d97918d909cf3ef97aff81c5609a104440419abfd39c not found: ID does not exist" Dec 10 16:06:58 crc kubenswrapper[4727]: I1210 16:06:58.577124 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc408785-8e41-4a34-8ea8-0533bb01626d" path="/var/lib/kubelet/pods/cc408785-8e41-4a34-8ea8-0533bb01626d/volumes" Dec 10 16:07:00 crc kubenswrapper[4727]: E1210 16:07:00.566177 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:07:02 crc kubenswrapper[4727]: I1210 16:07:02.177883 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2_af932d9d-d878-4924-a389-19fea975fe84/util/0.log" Dec 10 16:07:02 crc kubenswrapper[4727]: I1210 16:07:02.356921 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2_af932d9d-d878-4924-a389-19fea975fe84/pull/0.log" Dec 10 16:07:02 crc kubenswrapper[4727]: I1210 16:07:02.357390 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2_af932d9d-d878-4924-a389-19fea975fe84/pull/0.log" Dec 10 16:07:02 crc kubenswrapper[4727]: I1210 16:07:02.403221 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2_af932d9d-d878-4924-a389-19fea975fe84/util/0.log" Dec 10 16:07:02 crc kubenswrapper[4727]: I1210 16:07:02.899340 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2_af932d9d-d878-4924-a389-19fea975fe84/util/0.log" Dec 10 16:07:02 crc kubenswrapper[4727]: I1210 16:07:02.958092 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2_af932d9d-d878-4924-a389-19fea975fe84/extract/0.log" Dec 10 16:07:03 crc kubenswrapper[4727]: I1210 16:07:03.056016 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae7735hjx2_af932d9d-d878-4924-a389-19fea975fe84/pull/0.log" Dec 10 16:07:03 crc kubenswrapper[4727]: I1210 16:07:03.163676 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz_f54db9e5-372c-4dda-a4f8-2802f691871c/util/0.log" Dec 10 16:07:03 crc kubenswrapper[4727]: I1210 16:07:03.368369 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz_f54db9e5-372c-4dda-a4f8-2802f691871c/util/0.log" Dec 10 16:07:03 crc kubenswrapper[4727]: I1210 16:07:03.373695 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz_f54db9e5-372c-4dda-a4f8-2802f691871c/pull/0.log" Dec 10 16:07:03 crc kubenswrapper[4727]: I1210 16:07:03.408047 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz_f54db9e5-372c-4dda-a4f8-2802f691871c/pull/0.log" Dec 10 16:07:03 crc kubenswrapper[4727]: E1210 16:07:03.567609 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:07:03 crc kubenswrapper[4727]: I1210 16:07:03.585183 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz_f54db9e5-372c-4dda-a4f8-2802f691871c/util/0.log" Dec 10 16:07:03 crc kubenswrapper[4727]: I1210 16:07:03.585518 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz_f54db9e5-372c-4dda-a4f8-2802f691871c/pull/0.log" Dec 10 16:07:03 crc kubenswrapper[4727]: I1210 16:07:03.598837 4727 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fqdkwz_f54db9e5-372c-4dda-a4f8-2802f691871c/extract/0.log" Dec 10 16:07:03 crc kubenswrapper[4727]: I1210 16:07:03.747603 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf_3a8f90ea-a6d0-4ea4-8573-2ea50493e86e/util/0.log" Dec 10 16:07:03 crc kubenswrapper[4727]: I1210 16:07:03.895225 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf_3a8f90ea-a6d0-4ea4-8573-2ea50493e86e/util/0.log" Dec 10 16:07:03 crc kubenswrapper[4727]: I1210 16:07:03.912669 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf_3a8f90ea-a6d0-4ea4-8573-2ea50493e86e/pull/0.log" Dec 10 16:07:03 crc kubenswrapper[4727]: I1210 16:07:03.946040 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf_3a8f90ea-a6d0-4ea4-8573-2ea50493e86e/pull/0.log" Dec 10 16:07:04 crc kubenswrapper[4727]: I1210 16:07:04.688953 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf_3a8f90ea-a6d0-4ea4-8573-2ea50493e86e/util/0.log" Dec 10 16:07:04 crc kubenswrapper[4727]: I1210 16:07:04.718265 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf_3a8f90ea-a6d0-4ea4-8573-2ea50493e86e/pull/0.log" Dec 10 16:07:04 crc kubenswrapper[4727]: I1210 16:07:04.743020 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210wplxf_3a8f90ea-a6d0-4ea4-8573-2ea50493e86e/extract/0.log" Dec 10 16:07:04 crc kubenswrapper[4727]: I1210 16:07:04.901811 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6_163d48a4-75d1-458f-96a2-18760ac78989/util/0.log" Dec 10 16:07:05 crc kubenswrapper[4727]: I1210 16:07:05.105793 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6_163d48a4-75d1-458f-96a2-18760ac78989/util/0.log" Dec 10 16:07:05 crc kubenswrapper[4727]: I1210 16:07:05.142764 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6_163d48a4-75d1-458f-96a2-18760ac78989/pull/0.log" Dec 10 16:07:05 crc kubenswrapper[4727]: I1210 16:07:05.146596 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6_163d48a4-75d1-458f-96a2-18760ac78989/pull/0.log" Dec 10 16:07:05 crc kubenswrapper[4727]: I1210 16:07:05.345824 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6_163d48a4-75d1-458f-96a2-18760ac78989/pull/0.log" Dec 10 16:07:05 crc kubenswrapper[4727]: I1210 16:07:05.369233 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6_163d48a4-75d1-458f-96a2-18760ac78989/util/0.log" Dec 10 16:07:05 crc kubenswrapper[4727]: I1210 16:07:05.403987 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c146kr6_163d48a4-75d1-458f-96a2-18760ac78989/extract/0.log" Dec 10 16:07:05 crc kubenswrapper[4727]: I1210 16:07:05.589653 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b_ace248eb-6c0e-465a-a21f-c1b5508cecab/util/0.log" Dec 10 16:07:05 crc kubenswrapper[4727]: I1210 16:07:05.728127 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b_ace248eb-6c0e-465a-a21f-c1b5508cecab/pull/0.log" Dec 10 16:07:05 crc kubenswrapper[4727]: I1210 16:07:05.742661 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b_ace248eb-6c0e-465a-a21f-c1b5508cecab/pull/0.log" Dec 10 16:07:05 crc kubenswrapper[4727]: I1210 16:07:05.757991 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b_ace248eb-6c0e-465a-a21f-c1b5508cecab/util/0.log" Dec 10 16:07:05 crc kubenswrapper[4727]: I1210 16:07:05.892420 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b_ace248eb-6c0e-465a-a21f-c1b5508cecab/util/0.log" Dec 10 16:07:05 crc kubenswrapper[4727]: I1210 16:07:05.899556 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b_ace248eb-6c0e-465a-a21f-c1b5508cecab/pull/0.log" Dec 10 16:07:05 crc kubenswrapper[4727]: I1210 16:07:05.950253 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vxf7b_ace248eb-6c0e-465a-a21f-c1b5508cecab/extract/0.log" Dec 10 16:07:05 crc kubenswrapper[4727]: I1210 16:07:05.953846 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gtb8_01e79a4b-ad5b-4dc0-ab86-650c80fb76b7/extract-utilities/0.log" Dec 10 16:07:06 crc kubenswrapper[4727]: I1210 16:07:06.161209 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gtb8_01e79a4b-ad5b-4dc0-ab86-650c80fb76b7/extract-content/0.log" Dec 10 16:07:06 crc kubenswrapper[4727]: I1210 16:07:06.200865 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gtb8_01e79a4b-ad5b-4dc0-ab86-650c80fb76b7/extract-utilities/0.log" Dec 10 16:07:06 crc kubenswrapper[4727]: I1210 16:07:06.219388 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gtb8_01e79a4b-ad5b-4dc0-ab86-650c80fb76b7/extract-content/0.log" Dec 10 16:07:06 crc kubenswrapper[4727]: I1210 16:07:06.387153 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gtb8_01e79a4b-ad5b-4dc0-ab86-650c80fb76b7/extract-content/0.log" Dec 10 16:07:06 crc kubenswrapper[4727]: I1210 16:07:06.399625 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-6gtb8_01e79a4b-ad5b-4dc0-ab86-650c80fb76b7/extract-utilities/0.log" Dec 10 16:07:06 crc kubenswrapper[4727]: I1210 16:07:06.500354 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x4s6s_7b969113-91be-460a-b1da-dcd546d469c5/extract-utilities/0.log" Dec 10 16:07:06 crc kubenswrapper[4727]: I1210 16:07:06.833548 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x4s6s_7b969113-91be-460a-b1da-dcd546d469c5/extract-content/0.log" Dec 10 16:07:06 crc kubenswrapper[4727]: I1210 16:07:06.844260 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x4s6s_7b969113-91be-460a-b1da-dcd546d469c5/extract-content/0.log" Dec 10 16:07:06 crc kubenswrapper[4727]: I1210 16:07:06.861646 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x4s6s_7b969113-91be-460a-b1da-dcd546d469c5/extract-utilities/0.log" Dec 10 16:07:07 crc kubenswrapper[4727]: I1210 16:07:07.067931 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gtb8_01e79a4b-ad5b-4dc0-ab86-650c80fb76b7/registry-server/0.log" Dec 10 16:07:07 crc kubenswrapper[4727]: I1210 16:07:07.082130 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x4s6s_7b969113-91be-460a-b1da-dcd546d469c5/extract-utilities/0.log" Dec 10 16:07:07 crc kubenswrapper[4727]: I1210 16:07:07.108183 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x4s6s_7b969113-91be-460a-b1da-dcd546d469c5/extract-content/0.log" Dec 10 16:07:07 crc kubenswrapper[4727]: I1210 16:07:07.303415 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-9v4p4_c0925ef8-5391-40fa-a9a9-898f5dfdec33/marketplace-operator/0.log" Dec 10 16:07:07 crc kubenswrapper[4727]: I1210 16:07:07.348534 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6w48_b2cc6852-6459-4018-86cf-8bec07f223d7/extract-utilities/0.log" Dec 10 16:07:07 crc kubenswrapper[4727]: I1210 16:07:07.586469 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6w48_b2cc6852-6459-4018-86cf-8bec07f223d7/extract-utilities/0.log" Dec 10 16:07:07 crc kubenswrapper[4727]: I1210 16:07:07.588198 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6w48_b2cc6852-6459-4018-86cf-8bec07f223d7/extract-content/0.log" Dec 10 16:07:07 crc kubenswrapper[4727]: I1210 16:07:07.657685 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6w48_b2cc6852-6459-4018-86cf-8bec07f223d7/extract-content/0.log" Dec 10 16:07:07 crc kubenswrapper[4727]: I1210 16:07:07.893430 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6w48_b2cc6852-6459-4018-86cf-8bec07f223d7/extract-content/0.log" Dec 10 16:07:07 crc kubenswrapper[4727]: I1210 16:07:07.921179 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6w48_b2cc6852-6459-4018-86cf-8bec07f223d7/extract-utilities/0.log" Dec 10 16:07:07 crc kubenswrapper[4727]: I1210 16:07:07.974744 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-x4s6s_7b969113-91be-460a-b1da-dcd546d469c5/registry-server/0.log" Dec 10 16:07:08 crc kubenswrapper[4727]: I1210 16:07:08.208743 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5z8n5_5f5b0a32-1b49-4444-8ebc-6fc35209e8e2/extract-utilities/0.log" Dec 10 16:07:08 crc kubenswrapper[4727]: I1210 16:07:08.295837 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6w48_b2cc6852-6459-4018-86cf-8bec07f223d7/registry-server/0.log" Dec 10 16:07:08 crc kubenswrapper[4727]: I1210 16:07:08.418613 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5z8n5_5f5b0a32-1b49-4444-8ebc-6fc35209e8e2/extract-content/0.log" Dec 10 16:07:08 crc kubenswrapper[4727]: I1210 16:07:08.422112 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5z8n5_5f5b0a32-1b49-4444-8ebc-6fc35209e8e2/extract-utilities/0.log" Dec 10 16:07:08 crc kubenswrapper[4727]: I1210 16:07:08.428535 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5z8n5_5f5b0a32-1b49-4444-8ebc-6fc35209e8e2/extract-content/0.log" Dec 10 16:07:08 crc kubenswrapper[4727]: I1210 16:07:08.658042 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5z8n5_5f5b0a32-1b49-4444-8ebc-6fc35209e8e2/extract-utilities/0.log" Dec 10 16:07:08 crc kubenswrapper[4727]: I1210 16:07:08.677972 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5z8n5_5f5b0a32-1b49-4444-8ebc-6fc35209e8e2/extract-content/0.log" Dec 10 16:07:08 crc kubenswrapper[4727]: I1210 16:07:08.850421 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5z8n5_5f5b0a32-1b49-4444-8ebc-6fc35209e8e2/registry-server/0.log" Dec 10 16:07:14 crc kubenswrapper[4727]: E1210 16:07:14.566697 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:07:16 crc kubenswrapper[4727]: E1210 16:07:16.573262 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:07:22 crc kubenswrapper[4727]: I1210 16:07:22.193735 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-hzsr6_81baa0e2-5696-4291-9455-024cb6f22dd5/prometheus-operator/0.log" Dec 10 16:07:22 crc kubenswrapper[4727]: I1210 16:07:22.414453 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-8hdpw_e6893fbb-598f-492a-83cc-ad8e77e058e8/prometheus-operator-admission-webhook/0.log" Dec 10 16:07:22 crc kubenswrapper[4727]: I1210 16:07:22.448303 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7cfcb8dbbc-lxqdd_173dcd53-e8a8-4f7c-a9f0-e923495d8068/prometheus-operator-admission-webhook/0.log" Dec 10 16:07:22 crc kubenswrapper[4727]: I1210 16:07:22.748175 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-tbccx_10b7d19d-b7d0-4ff2-9e6e-2b82e33c746c/operator/0.log" Dec 10 16:07:22 crc kubenswrapper[4727]: I1210 16:07:22.813765 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-d4j2k_7b7103db-15ac-4e33-89e2-50288a5e12dd/perses-operator/0.log" Dec 10 16:07:29 crc kubenswrapper[4727]: E1210 16:07:29.564964 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:07:30 crc kubenswrapper[4727]: E1210 16:07:30.565346 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:07:37 crc kubenswrapper[4727]: I1210 16:07:37.761289 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-77d49cfc99-tqvhd_df5cd708-bc5a-4188-84d4-10f25154053d/kube-rbac-proxy/0.log" Dec 10 16:07:37 crc kubenswrapper[4727]: I1210 16:07:37.817336 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-77d49cfc99-tqvhd_df5cd708-bc5a-4188-84d4-10f25154053d/manager/0.log" Dec 10 16:07:41 crc kubenswrapper[4727]: E1210 16:07:41.565203 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:07:42 crc kubenswrapper[4727]: E1210 16:07:42.566002 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:07:55 crc kubenswrapper[4727]: E1210 16:07:55.564814 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:07:56 crc kubenswrapper[4727]: E1210 16:07:56.568259 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:08:06 crc kubenswrapper[4727]: E1210 16:08:06.574222 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:08:07 crc kubenswrapper[4727]: I1210 16:08:07.732969 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:08:07 crc kubenswrapper[4727]: I1210 16:08:07.733046 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:08:09 crc kubenswrapper[4727]: E1210 16:08:09.565475 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:08:21 crc kubenswrapper[4727]: E1210 16:08:21.566115 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:08:21 crc kubenswrapper[4727]: E1210 16:08:21.566131 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:08:34 crc kubenswrapper[4727]: E1210 16:08:34.572613 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:08:36 crc kubenswrapper[4727]: E1210 16:08:36.582200 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:08:37 crc kubenswrapper[4727]: I1210 16:08:37.724428 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:08:37 crc kubenswrapper[4727]: I1210 16:08:37.724824 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:08:46 crc kubenswrapper[4727]: E1210 16:08:46.572264 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.349617 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p9q57"] Dec 10 16:08:47 crc kubenswrapper[4727]: E1210 16:08:47.350221 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc408785-8e41-4a34-8ea8-0533bb01626d" containerName="extract-content" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.350247 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc408785-8e41-4a34-8ea8-0533bb01626d" containerName="extract-content" Dec 10 16:08:47 crc kubenswrapper[4727]: E1210 16:08:47.350321 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc408785-8e41-4a34-8ea8-0533bb01626d" containerName="extract-utilities" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.350332 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc408785-8e41-4a34-8ea8-0533bb01626d" containerName="extract-utilities" Dec 10 16:08:47 crc kubenswrapper[4727]: E1210 16:08:47.350380 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc408785-8e41-4a34-8ea8-0533bb01626d" containerName="registry-server" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.350391 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc408785-8e41-4a34-8ea8-0533bb01626d" containerName="registry-server" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.350663 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc408785-8e41-4a34-8ea8-0533bb01626d" containerName="registry-server" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.353695 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.363811 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9q57"] Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.388104 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56596199-5479-42e7-94d4-74f12c670881-utilities\") pod \"community-operators-p9q57\" (UID: \"56596199-5479-42e7-94d4-74f12c670881\") " pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.388342 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvdz4\" (UniqueName: \"kubernetes.io/projected/56596199-5479-42e7-94d4-74f12c670881-kube-api-access-rvdz4\") pod \"community-operators-p9q57\" (UID: \"56596199-5479-42e7-94d4-74f12c670881\") " pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.388658 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56596199-5479-42e7-94d4-74f12c670881-catalog-content\") pod \"community-operators-p9q57\" (UID: \"56596199-5479-42e7-94d4-74f12c670881\") " pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.491266 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvdz4\" (UniqueName: \"kubernetes.io/projected/56596199-5479-42e7-94d4-74f12c670881-kube-api-access-rvdz4\") pod \"community-operators-p9q57\" (UID: \"56596199-5479-42e7-94d4-74f12c670881\") " pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.493410 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56596199-5479-42e7-94d4-74f12c670881-catalog-content\") pod \"community-operators-p9q57\" (UID: \"56596199-5479-42e7-94d4-74f12c670881\") " pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.494053 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56596199-5479-42e7-94d4-74f12c670881-catalog-content\") pod \"community-operators-p9q57\" (UID: \"56596199-5479-42e7-94d4-74f12c670881\") " pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.494208 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56596199-5479-42e7-94d4-74f12c670881-utilities\") pod \"community-operators-p9q57\" (UID: \"56596199-5479-42e7-94d4-74f12c670881\") " pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.494676 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56596199-5479-42e7-94d4-74f12c670881-utilities\") pod \"community-operators-p9q57\" (UID: \"56596199-5479-42e7-94d4-74f12c670881\") " pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.528492 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rvdz4\" (UniqueName: \"kubernetes.io/projected/56596199-5479-42e7-94d4-74f12c670881-kube-api-access-rvdz4\") pod \"community-operators-p9q57\" (UID: \"56596199-5479-42e7-94d4-74f12c670881\") " pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:08:47 crc kubenswrapper[4727]: I1210 16:08:47.694337 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:08:48 crc kubenswrapper[4727]: I1210 16:08:48.239999 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9q57"] Dec 10 16:08:48 crc kubenswrapper[4727]: W1210 16:08:48.240123 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56596199_5479_42e7_94d4_74f12c670881.slice/crio-dcbb0da590a2215eb6372b9b429e11475c05af08883d112e24e0ea02f3e86b8e WatchSource:0}: Error finding container dcbb0da590a2215eb6372b9b429e11475c05af08883d112e24e0ea02f3e86b8e: Status 404 returned error can't find the container with id dcbb0da590a2215eb6372b9b429e11475c05af08883d112e24e0ea02f3e86b8e Dec 10 16:08:49 crc kubenswrapper[4727]: I1210 16:08:49.119942 4727 generic.go:334] "Generic (PLEG): container finished" podID="56596199-5479-42e7-94d4-74f12c670881" containerID="7f553f601fe2d337d4b3c54e999d93a2dafa7a4d7be4598d8eb994f2a989f596" exitCode=0 Dec 10 16:08:49 crc kubenswrapper[4727]: I1210 16:08:49.120017 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9q57" event={"ID":"56596199-5479-42e7-94d4-74f12c670881","Type":"ContainerDied","Data":"7f553f601fe2d337d4b3c54e999d93a2dafa7a4d7be4598d8eb994f2a989f596"} Dec 10 16:08:49 crc kubenswrapper[4727]: I1210 16:08:49.120298 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9q57" event={"ID":"56596199-5479-42e7-94d4-74f12c670881","Type":"ContainerStarted","Data":"dcbb0da590a2215eb6372b9b429e11475c05af08883d112e24e0ea02f3e86b8e"} Dec 10 16:08:49 crc kubenswrapper[4727]: E1210 16:08:49.566722 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:08:51 crc kubenswrapper[4727]: I1210 16:08:51.149597 4727 generic.go:334] "Generic (PLEG): container finished" podID="56596199-5479-42e7-94d4-74f12c670881" containerID="d192451251d51931679d30f37b3ff72a3f47c87ee0c5ce1b7b7315564b03d2b1" exitCode=0 Dec 10 16:08:51 crc kubenswrapper[4727]: I1210 16:08:51.149728 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9q57" event={"ID":"56596199-5479-42e7-94d4-74f12c670881","Type":"ContainerDied","Data":"d192451251d51931679d30f37b3ff72a3f47c87ee0c5ce1b7b7315564b03d2b1"} Dec 10 16:08:53 crc kubenswrapper[4727]: I1210 16:08:53.171147 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9q57" event={"ID":"56596199-5479-42e7-94d4-74f12c670881","Type":"ContainerStarted","Data":"d75790589a91823559c015959c58ca5ca6db15a3e60a2576d225f2f843f02b36"} Dec 10 16:08:53 crc kubenswrapper[4727]: I1210 16:08:53.199206 4727 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/community-operators-p9q57" podStartSLOduration=2.923286465 podStartE2EDuration="6.199164017s" podCreationTimestamp="2025-12-10 16:08:47 +0000 UTC" firstStartedPulling="2025-12-10 16:08:49.122079645 +0000 UTC m=+5833.316854187" lastFinishedPulling="2025-12-10 16:08:52.397957197 +0000 UTC m=+5836.592731739" observedRunningTime="2025-12-10 16:08:53.194425427 +0000 UTC m=+5837.389199969" watchObservedRunningTime="2025-12-10 16:08:53.199164017 +0000 UTC m=+5837.393938559" Dec 10 16:08:57 crc kubenswrapper[4727]: I1210 16:08:57.695362 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:08:57 crc kubenswrapper[4727]: I1210 16:08:57.695803 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:08:57 crc kubenswrapper[4727]: I1210 16:08:57.743068 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:08:58 crc kubenswrapper[4727]: I1210 16:08:58.262604 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:08:58 crc kubenswrapper[4727]: I1210 16:08:58.324335 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9q57"] Dec 10 16:08:58 crc kubenswrapper[4727]: E1210 16:08:58.565797 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:09:00 crc kubenswrapper[4727]: I1210 16:09:00.242323 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p9q57" podUID="56596199-5479-42e7-94d4-74f12c670881" containerName="registry-server" containerID="cri-o://d75790589a91823559c015959c58ca5ca6db15a3e60a2576d225f2f843f02b36" gracePeriod=2 Dec 10 16:09:00 crc kubenswrapper[4727]: I1210 16:09:00.805936 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:09:00 crc kubenswrapper[4727]: I1210 16:09:00.919049 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56596199-5479-42e7-94d4-74f12c670881-catalog-content\") pod \"56596199-5479-42e7-94d4-74f12c670881\" (UID: \"56596199-5479-42e7-94d4-74f12c670881\") " Dec 10 16:09:00 crc kubenswrapper[4727]: I1210 16:09:00.919183 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvdz4\" (UniqueName: \"kubernetes.io/projected/56596199-5479-42e7-94d4-74f12c670881-kube-api-access-rvdz4\") pod \"56596199-5479-42e7-94d4-74f12c670881\" (UID: \"56596199-5479-42e7-94d4-74f12c670881\") " Dec 10 16:09:00 crc kubenswrapper[4727]: I1210 16:09:00.919264 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56596199-5479-42e7-94d4-74f12c670881-utilities\") pod \"56596199-5479-42e7-94d4-74f12c670881\" (UID: \"56596199-5479-42e7-94d4-74f12c670881\") " Dec 10 16:09:00 crc kubenswrapper[4727]: I1210 16:09:00.920729 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56596199-5479-42e7-94d4-74f12c670881-utilities" (OuterVolumeSpecName: "utilities") pod "56596199-5479-42e7-94d4-74f12c670881" (UID: "56596199-5479-42e7-94d4-74f12c670881"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:09:00 crc kubenswrapper[4727]: I1210 16:09:00.928163 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56596199-5479-42e7-94d4-74f12c670881-kube-api-access-rvdz4" (OuterVolumeSpecName: "kube-api-access-rvdz4") pod "56596199-5479-42e7-94d4-74f12c670881" (UID: "56596199-5479-42e7-94d4-74f12c670881"). InnerVolumeSpecName "kube-api-access-rvdz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.022236 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvdz4\" (UniqueName: \"kubernetes.io/projected/56596199-5479-42e7-94d4-74f12c670881-kube-api-access-rvdz4\") on node \"crc\" DevicePath \"\"" Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.022267 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56596199-5479-42e7-94d4-74f12c670881-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.080323 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56596199-5479-42e7-94d4-74f12c670881-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56596199-5479-42e7-94d4-74f12c670881" (UID: "56596199-5479-42e7-94d4-74f12c670881"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.124178 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56596199-5479-42e7-94d4-74f12c670881-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.253827 4727 generic.go:334] "Generic (PLEG): container finished" podID="56596199-5479-42e7-94d4-74f12c670881" containerID="d75790589a91823559c015959c58ca5ca6db15a3e60a2576d225f2f843f02b36" exitCode=0 Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.253873 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9q57" event={"ID":"56596199-5479-42e7-94d4-74f12c670881","Type":"ContainerDied","Data":"d75790589a91823559c015959c58ca5ca6db15a3e60a2576d225f2f843f02b36"} Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.253934 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9q57" event={"ID":"56596199-5479-42e7-94d4-74f12c670881","Type":"ContainerDied","Data":"dcbb0da590a2215eb6372b9b429e11475c05af08883d112e24e0ea02f3e86b8e"} Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.253958 4727 scope.go:117] "RemoveContainer" containerID="d75790589a91823559c015959c58ca5ca6db15a3e60a2576d225f2f843f02b36" Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.256347 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9q57" Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.274760 4727 scope.go:117] "RemoveContainer" containerID="d192451251d51931679d30f37b3ff72a3f47c87ee0c5ce1b7b7315564b03d2b1" Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.300218 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9q57"] Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.311828 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p9q57"] Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.314726 4727 scope.go:117] "RemoveContainer" containerID="7f553f601fe2d337d4b3c54e999d93a2dafa7a4d7be4598d8eb994f2a989f596" Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.356358 4727 scope.go:117] "RemoveContainer" containerID="d75790589a91823559c015959c58ca5ca6db15a3e60a2576d225f2f843f02b36" Dec 10 16:09:01 crc kubenswrapper[4727]: E1210 16:09:01.360115 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d75790589a91823559c015959c58ca5ca6db15a3e60a2576d225f2f843f02b36\": container with ID starting with d75790589a91823559c015959c58ca5ca6db15a3e60a2576d225f2f843f02b36 not found: ID does not exist" containerID="d75790589a91823559c015959c58ca5ca6db15a3e60a2576d225f2f843f02b36" Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.360166 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d75790589a91823559c015959c58ca5ca6db15a3e60a2576d225f2f843f02b36"} err="failed to get container status \"d75790589a91823559c015959c58ca5ca6db15a3e60a2576d225f2f843f02b36\": rpc error: code = NotFound desc = could not find container \"d75790589a91823559c015959c58ca5ca6db15a3e60a2576d225f2f843f02b36\": container with ID starting with d75790589a91823559c015959c58ca5ca6db15a3e60a2576d225f2f843f02b36 not found: ID does not exist" Dec 10 
16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.360198 4727 scope.go:117] "RemoveContainer" containerID="d192451251d51931679d30f37b3ff72a3f47c87ee0c5ce1b7b7315564b03d2b1" Dec 10 16:09:01 crc kubenswrapper[4727]: E1210 16:09:01.360786 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d192451251d51931679d30f37b3ff72a3f47c87ee0c5ce1b7b7315564b03d2b1\": container with ID starting with d192451251d51931679d30f37b3ff72a3f47c87ee0c5ce1b7b7315564b03d2b1 not found: ID does not exist" containerID="d192451251d51931679d30f37b3ff72a3f47c87ee0c5ce1b7b7315564b03d2b1" Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.360924 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d192451251d51931679d30f37b3ff72a3f47c87ee0c5ce1b7b7315564b03d2b1"} err="failed to get container status \"d192451251d51931679d30f37b3ff72a3f47c87ee0c5ce1b7b7315564b03d2b1\": rpc error: code = NotFound desc = could not find container \"d192451251d51931679d30f37b3ff72a3f47c87ee0c5ce1b7b7315564b03d2b1\": container with ID starting with d192451251d51931679d30f37b3ff72a3f47c87ee0c5ce1b7b7315564b03d2b1 not found: ID does not exist" Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.361027 4727 scope.go:117] "RemoveContainer" containerID="7f553f601fe2d337d4b3c54e999d93a2dafa7a4d7be4598d8eb994f2a989f596" Dec 10 16:09:01 crc kubenswrapper[4727]: E1210 16:09:01.361522 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f553f601fe2d337d4b3c54e999d93a2dafa7a4d7be4598d8eb994f2a989f596\": container with ID starting with 7f553f601fe2d337d4b3c54e999d93a2dafa7a4d7be4598d8eb994f2a989f596 not found: ID does not exist" containerID="7f553f601fe2d337d4b3c54e999d93a2dafa7a4d7be4598d8eb994f2a989f596" Dec 10 16:09:01 crc kubenswrapper[4727]: I1210 16:09:01.361564 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f553f601fe2d337d4b3c54e999d93a2dafa7a4d7be4598d8eb994f2a989f596"} err="failed to get container status \"7f553f601fe2d337d4b3c54e999d93a2dafa7a4d7be4598d8eb994f2a989f596\": rpc error: code = NotFound desc = could not find container \"7f553f601fe2d337d4b3c54e999d93a2dafa7a4d7be4598d8eb994f2a989f596\": container with ID starting with 7f553f601fe2d337d4b3c54e999d93a2dafa7a4d7be4598d8eb994f2a989f596 not found: ID does not exist" Dec 10 16:09:02 crc kubenswrapper[4727]: E1210 16:09:02.569060 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:09:02 crc kubenswrapper[4727]: I1210 16:09:02.582359 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56596199-5479-42e7-94d4-74f12c670881" path="/var/lib/kubelet/pods/56596199-5479-42e7-94d4-74f12c670881/volumes" Dec 10 16:09:07 crc kubenswrapper[4727]: I1210 16:09:07.724180 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:09:07 crc kubenswrapper[4727]: I1210 16:09:07.724807 4727 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:09:07 crc kubenswrapper[4727]: I1210 16:09:07.724866 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 16:09:07 crc kubenswrapper[4727]: I1210 16:09:07.725846 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ceecd4ab9f335ebf00db0ddc369f5513889fdc44b52e84f058ca732803b5712"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:09:07 crc kubenswrapper[4727]: I1210 16:09:07.725922 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://2ceecd4ab9f335ebf00db0ddc369f5513889fdc44b52e84f058ca732803b5712" gracePeriod=600 Dec 10 16:09:08 crc kubenswrapper[4727]: I1210 16:09:08.398742 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="2ceecd4ab9f335ebf00db0ddc369f5513889fdc44b52e84f058ca732803b5712" exitCode=0 Dec 10 16:09:08 crc kubenswrapper[4727]: I1210 16:09:08.399027 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"2ceecd4ab9f335ebf00db0ddc369f5513889fdc44b52e84f058ca732803b5712"} Dec 10 16:09:08 crc kubenswrapper[4727]: I1210 16:09:08.399384 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerStarted","Data":"ca0882b4a16abbfc402f48a0de802bc0d82fd99e6ea11ed0f352230d7ab1d463"} Dec 10 16:09:08 crc kubenswrapper[4727]: I1210 16:09:08.399417 4727 scope.go:117] "RemoveContainer" containerID="bab374e67c9ed7c90284ebd2f770e834e871473e647da3231d32dccf483e4363" Dec 10 16:09:12 crc kubenswrapper[4727]: E1210 16:09:12.566580 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:09:16 crc kubenswrapper[4727]: E1210 16:09:16.572751 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:09:25 crc kubenswrapper[4727]: I1210 16:09:25.566130 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:09:25 crc kubenswrapper[4727]: E1210 16:09:25.688835 4727 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:09:25 crc kubenswrapper[4727]: E1210 16:09:25.688899 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:09:25 crc kubenswrapper[4727]: E1210 16:09:25.689076 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsk25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
cloudkitty-db-sync-dxhgk_openstack(64cfea48-c6f9-4698-a328-62937b40c2db): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:09:25 crc kubenswrapper[4727]: E1210 16:09:25.690244 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:09:28 crc kubenswrapper[4727]: I1210 16:09:28.638331 4727 scope.go:117] "RemoveContainer" containerID="7ff014e82b7909e228240e0b82054fc85f635fe65ee8d7a31dcfdf9e4ca8b945" Dec 10 16:09:29 crc kubenswrapper[4727]: E1210 16:09:29.566632 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:09:34 crc kubenswrapper[4727]: I1210 16:09:34.684772 4727 generic.go:334] "Generic (PLEG): container finished" podID="56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5" containerID="4e6e93b2720c741dcabfd8a3fa859435523e7290cbf9a4fb17afc9ca823bec3f" exitCode=0 Dec 10 16:09:34 crc kubenswrapper[4727]: I1210 16:09:34.684980 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tlvqk/must-gather-kvlgm" event={"ID":"56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5","Type":"ContainerDied","Data":"4e6e93b2720c741dcabfd8a3fa859435523e7290cbf9a4fb17afc9ca823bec3f"} Dec 10 16:09:34 crc kubenswrapper[4727]: I1210 16:09:34.686047 4727 scope.go:117] "RemoveContainer" containerID="4e6e93b2720c741dcabfd8a3fa859435523e7290cbf9a4fb17afc9ca823bec3f" Dec 10 16:09:35 crc kubenswrapper[4727]: I1210 16:09:35.245872 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tlvqk_must-gather-kvlgm_56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5/gather/0.log" Dec 10 16:09:40 crc kubenswrapper[4727]: E1210 16:09:40.615947 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:09:42 crc kubenswrapper[4727]: I1210 16:09:42.980031 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tlvqk/must-gather-kvlgm"] Dec 10 16:09:42 crc kubenswrapper[4727]: I1210 16:09:42.980813 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tlvqk/must-gather-kvlgm" podUID="56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5" containerName="copy" containerID="cri-o://8c5d0612e187c1b76427881c9cd78320cc848b8929bcfb6728b671f76e705b12" gracePeriod=2 Dec 10 16:09:42 crc 
kubenswrapper[4727]: I1210 16:09:42.993487 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tlvqk/must-gather-kvlgm"] Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.572087 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tlvqk_must-gather-kvlgm_56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5/copy/0.log" Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.573051 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tlvqk/must-gather-kvlgm" Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.695900 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjqcq\" (UniqueName: \"kubernetes.io/projected/56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5-kube-api-access-hjqcq\") pod \"56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5\" (UID: \"56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5\") " Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.696004 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5-must-gather-output\") pod \"56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5\" (UID: \"56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5\") " Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.714171 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5-kube-api-access-hjqcq" (OuterVolumeSpecName: "kube-api-access-hjqcq") pod "56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5" (UID: "56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5"). InnerVolumeSpecName "kube-api-access-hjqcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.798549 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjqcq\" (UniqueName: \"kubernetes.io/projected/56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5-kube-api-access-hjqcq\") on node \"crc\" DevicePath \"\"" Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.802185 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tlvqk_must-gather-kvlgm_56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5/copy/0.log" Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.802650 4727 generic.go:334] "Generic (PLEG): container finished" podID="56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5" containerID="8c5d0612e187c1b76427881c9cd78320cc848b8929bcfb6728b671f76e705b12" exitCode=143 Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.802710 4727 scope.go:117] "RemoveContainer" containerID="8c5d0612e187c1b76427881c9cd78320cc848b8929bcfb6728b671f76e705b12" Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.802896 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tlvqk/must-gather-kvlgm" Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.830976 4727 scope.go:117] "RemoveContainer" containerID="4e6e93b2720c741dcabfd8a3fa859435523e7290cbf9a4fb17afc9ca823bec3f" Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.881971 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5" (UID: "56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.886989 4727 scope.go:117] "RemoveContainer" containerID="8c5d0612e187c1b76427881c9cd78320cc848b8929bcfb6728b671f76e705b12" Dec 10 16:09:43 crc kubenswrapper[4727]: E1210 16:09:43.887507 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c5d0612e187c1b76427881c9cd78320cc848b8929bcfb6728b671f76e705b12\": container with ID starting with 8c5d0612e187c1b76427881c9cd78320cc848b8929bcfb6728b671f76e705b12 not found: ID does not exist" containerID="8c5d0612e187c1b76427881c9cd78320cc848b8929bcfb6728b671f76e705b12" Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.887567 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c5d0612e187c1b76427881c9cd78320cc848b8929bcfb6728b671f76e705b12"} err="failed to get container status \"8c5d0612e187c1b76427881c9cd78320cc848b8929bcfb6728b671f76e705b12\": rpc error: code = NotFound desc = could not find container \"8c5d0612e187c1b76427881c9cd78320cc848b8929bcfb6728b671f76e705b12\": container with ID starting with 8c5d0612e187c1b76427881c9cd78320cc848b8929bcfb6728b671f76e705b12 not found: ID does not exist" Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.887622 4727 scope.go:117] "RemoveContainer" containerID="4e6e93b2720c741dcabfd8a3fa859435523e7290cbf9a4fb17afc9ca823bec3f" Dec 10 16:09:43 crc kubenswrapper[4727]: E1210 16:09:43.888072 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6e93b2720c741dcabfd8a3fa859435523e7290cbf9a4fb17afc9ca823bec3f\": container with ID starting with 4e6e93b2720c741dcabfd8a3fa859435523e7290cbf9a4fb17afc9ca823bec3f not found: ID does not exist" containerID="4e6e93b2720c741dcabfd8a3fa859435523e7290cbf9a4fb17afc9ca823bec3f" Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.888096 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e6e93b2720c741dcabfd8a3fa859435523e7290cbf9a4fb17afc9ca823bec3f"} err="failed to get container status \"4e6e93b2720c741dcabfd8a3fa859435523e7290cbf9a4fb17afc9ca823bec3f\": rpc error: code = NotFound desc = could not find container \"4e6e93b2720c741dcabfd8a3fa859435523e7290cbf9a4fb17afc9ca823bec3f\": container with ID starting with 4e6e93b2720c741dcabfd8a3fa859435523e7290cbf9a4fb17afc9ca823bec3f not found: ID does not exist" Dec 10 16:09:43 crc kubenswrapper[4727]: I1210 16:09:43.902357 4727 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 10 16:09:44 crc kubenswrapper[4727]: I1210 16:09:44.580806 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5" path="/var/lib/kubelet/pods/56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5/volumes" Dec 10 16:09:44 crc kubenswrapper[4727]: E1210 16:09:44.696439 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:09:44 crc kubenswrapper[4727]: E1210 16:09:44.696517 4727 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:09:44 crc kubenswrapper[4727]: E1210 16:09:44.696663 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hbh77hf8h5c7h549h86h68h65dh5fbhdfh5c5h5d8h76h5f8hd9h5bch59bh8ch5f9h659h67dh6bh69h5b4h659h56bh68h7ch658h568h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(727601cd-934c-4d0d-b32e-c66a80adbb9f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 10 16:09:44 crc kubenswrapper[4727]: E1210 16:09:44.697870 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:09:52 crc kubenswrapper[4727]: E1210 16:09:52.567655 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:09:59 crc kubenswrapper[4727]: E1210 16:09:59.565488 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:10:04 crc kubenswrapper[4727]: E1210 16:10:04.565630 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:10:14 crc kubenswrapper[4727]: E1210 16:10:14.566228 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:10:15 crc kubenswrapper[4727]: E1210 16:10:15.570348 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:10:26 crc kubenswrapper[4727]: E1210 16:10:26.704318 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:10:29 crc kubenswrapper[4727]: E1210 16:10:29.565320 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:10:41 crc 
kubenswrapper[4727]: E1210 16:10:41.567133 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:10:43 crc kubenswrapper[4727]: E1210 16:10:43.566171 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.791084 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ctzwf"] Dec 10 16:10:50 crc kubenswrapper[4727]: E1210 16:10:50.792209 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5" containerName="gather" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.792229 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5" containerName="gather" Dec 10 16:10:50 crc kubenswrapper[4727]: E1210 16:10:50.792252 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56596199-5479-42e7-94d4-74f12c670881" containerName="registry-server" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.792259 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="56596199-5479-42e7-94d4-74f12c670881" containerName="registry-server" Dec 10 16:10:50 crc kubenswrapper[4727]: E1210 16:10:50.792271 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56596199-5479-42e7-94d4-74f12c670881" containerName="extract-content" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.792278 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="56596199-5479-42e7-94d4-74f12c670881" containerName="extract-content" Dec 10 16:10:50 crc kubenswrapper[4727]: E1210 16:10:50.792319 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5" containerName="copy" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.792324 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5" containerName="copy" Dec 10 16:10:50 crc kubenswrapper[4727]: E1210 16:10:50.792337 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56596199-5479-42e7-94d4-74f12c670881" containerName="extract-utilities" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.792343 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="56596199-5479-42e7-94d4-74f12c670881" containerName="extract-utilities" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.792552 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="56596199-5479-42e7-94d4-74f12c670881" containerName="registry-server" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.792578 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5" containerName="gather" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.792588 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9364f-7b6a-47ce-b3b3-d4c823d0c6f5" containerName="copy" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 
16:10:50.794380 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.801570 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ctzwf"] Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.894292 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d202d4f-aba3-4302-8bef-e1653231a22a-catalog-content\") pod \"certified-operators-ctzwf\" (UID: \"7d202d4f-aba3-4302-8bef-e1653231a22a\") " pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.894406 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d202d4f-aba3-4302-8bef-e1653231a22a-utilities\") pod \"certified-operators-ctzwf\" (UID: \"7d202d4f-aba3-4302-8bef-e1653231a22a\") " pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.894556 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cz5v\" (UniqueName: \"kubernetes.io/projected/7d202d4f-aba3-4302-8bef-e1653231a22a-kube-api-access-5cz5v\") pod \"certified-operators-ctzwf\" (UID: \"7d202d4f-aba3-4302-8bef-e1653231a22a\") " pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.995978 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d202d4f-aba3-4302-8bef-e1653231a22a-utilities\") pod \"certified-operators-ctzwf\" (UID: \"7d202d4f-aba3-4302-8bef-e1653231a22a\") " pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.996098 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cz5v\" (UniqueName: \"kubernetes.io/projected/7d202d4f-aba3-4302-8bef-e1653231a22a-kube-api-access-5cz5v\") pod \"certified-operators-ctzwf\" (UID: \"7d202d4f-aba3-4302-8bef-e1653231a22a\") " pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.996195 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d202d4f-aba3-4302-8bef-e1653231a22a-catalog-content\") pod \"certified-operators-ctzwf\" (UID: \"7d202d4f-aba3-4302-8bef-e1653231a22a\") " pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.996621 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d202d4f-aba3-4302-8bef-e1653231a22a-catalog-content\") pod \"certified-operators-ctzwf\" (UID: \"7d202d4f-aba3-4302-8bef-e1653231a22a\") " pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:10:50 crc kubenswrapper[4727]: I1210 16:10:50.996834 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d202d4f-aba3-4302-8bef-e1653231a22a-utilities\") pod \"certified-operators-ctzwf\" (UID: \"7d202d4f-aba3-4302-8bef-e1653231a22a\") " pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:10:51 crc 
kubenswrapper[4727]: I1210 16:10:51.026710 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cz5v\" (UniqueName: \"kubernetes.io/projected/7d202d4f-aba3-4302-8bef-e1653231a22a-kube-api-access-5cz5v\") pod \"certified-operators-ctzwf\" (UID: \"7d202d4f-aba3-4302-8bef-e1653231a22a\") " pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:10:51 crc kubenswrapper[4727]: I1210 16:10:51.117986 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:10:51 crc kubenswrapper[4727]: I1210 16:10:51.648615 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ctzwf"] Dec 10 16:10:52 crc kubenswrapper[4727]: I1210 16:10:52.008172 4727 generic.go:334] "Generic (PLEG): container finished" podID="7d202d4f-aba3-4302-8bef-e1653231a22a" containerID="1621da32aa2c9be2b376b7ac68f56578add368aaa01010431cc888d159c21642" exitCode=0 Dec 10 16:10:52 crc kubenswrapper[4727]: I1210 16:10:52.008288 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctzwf" event={"ID":"7d202d4f-aba3-4302-8bef-e1653231a22a","Type":"ContainerDied","Data":"1621da32aa2c9be2b376b7ac68f56578add368aaa01010431cc888d159c21642"} Dec 10 16:10:52 crc kubenswrapper[4727]: I1210 16:10:52.008520 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctzwf" event={"ID":"7d202d4f-aba3-4302-8bef-e1653231a22a","Type":"ContainerStarted","Data":"9d1dd51b15ab05c1028985c2ab883a384fb298b082c7bcb4a5d764970ef4465f"} Dec 10 16:10:54 crc kubenswrapper[4727]: I1210 16:10:54.030308 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctzwf" event={"ID":"7d202d4f-aba3-4302-8bef-e1653231a22a","Type":"ContainerStarted","Data":"5b9485588938e1d8bbf0eed4918826e5787ec2bd2e1f669011cbc25eae33270d"} Dec 10 16:10:55 crc kubenswrapper[4727]: I1210 16:10:55.043684 4727 generic.go:334] "Generic (PLEG): container finished" podID="7d202d4f-aba3-4302-8bef-e1653231a22a" containerID="5b9485588938e1d8bbf0eed4918826e5787ec2bd2e1f669011cbc25eae33270d" exitCode=0 Dec 10 16:10:55 crc kubenswrapper[4727]: I1210 16:10:55.043724 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctzwf" event={"ID":"7d202d4f-aba3-4302-8bef-e1653231a22a","Type":"ContainerDied","Data":"5b9485588938e1d8bbf0eed4918826e5787ec2bd2e1f669011cbc25eae33270d"} Dec 10 16:10:56 crc kubenswrapper[4727]: I1210 16:10:56.056506 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctzwf" event={"ID":"7d202d4f-aba3-4302-8bef-e1653231a22a","Type":"ContainerStarted","Data":"4dd68691a09c53a0de5d643a632e3c4fb0995e184f32a3b868097b6d9e72e918"} Dec 10 16:10:56 crc kubenswrapper[4727]: I1210 16:10:56.080523 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ctzwf" podStartSLOduration=2.368180003 podStartE2EDuration="6.080504564s" podCreationTimestamp="2025-12-10 16:10:50 +0000 UTC" firstStartedPulling="2025-12-10 16:10:52.009931538 +0000 UTC m=+5956.204706080" lastFinishedPulling="2025-12-10 16:10:55.722256089 +0000 UTC m=+5959.917030641" observedRunningTime="2025-12-10 16:10:56.076631576 +0000 UTC m=+5960.271406118" watchObservedRunningTime="2025-12-10 16:10:56.080504564 +0000 UTC m=+5960.275279096" Dec 10 16:10:56 crc kubenswrapper[4727]: E1210 
16:10:56.583126 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:10:57 crc kubenswrapper[4727]: E1210 16:10:57.564456 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:11:01 crc kubenswrapper[4727]: I1210 16:11:01.118361 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:11:01 crc kubenswrapper[4727]: I1210 16:11:01.118989 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:11:01 crc kubenswrapper[4727]: I1210 16:11:01.193093 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:11:02 crc kubenswrapper[4727]: I1210 16:11:02.190072 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:11:02 crc kubenswrapper[4727]: I1210 16:11:02.245541 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ctzwf"] Dec 10 16:11:04 crc kubenswrapper[4727]: I1210 16:11:04.140999 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ctzwf" podUID="7d202d4f-aba3-4302-8bef-e1653231a22a" containerName="registry-server" containerID="cri-o://4dd68691a09c53a0de5d643a632e3c4fb0995e184f32a3b868097b6d9e72e918" gracePeriod=2 Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.150567 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.151675 4727 generic.go:334] "Generic (PLEG): container finished" podID="7d202d4f-aba3-4302-8bef-e1653231a22a" containerID="4dd68691a09c53a0de5d643a632e3c4fb0995e184f32a3b868097b6d9e72e918" exitCode=0 Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.151724 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctzwf" event={"ID":"7d202d4f-aba3-4302-8bef-e1653231a22a","Type":"ContainerDied","Data":"4dd68691a09c53a0de5d643a632e3c4fb0995e184f32a3b868097b6d9e72e918"} Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.151762 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctzwf" event={"ID":"7d202d4f-aba3-4302-8bef-e1653231a22a","Type":"ContainerDied","Data":"9d1dd51b15ab05c1028985c2ab883a384fb298b082c7bcb4a5d764970ef4465f"} Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.151781 4727 scope.go:117] "RemoveContainer" containerID="4dd68691a09c53a0de5d643a632e3c4fb0995e184f32a3b868097b6d9e72e918" Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.170642 4727 scope.go:117] "RemoveContainer" containerID="5b9485588938e1d8bbf0eed4918826e5787ec2bd2e1f669011cbc25eae33270d" Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.213377 4727 scope.go:117] "RemoveContainer" containerID="1621da32aa2c9be2b376b7ac68f56578add368aaa01010431cc888d159c21642" Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.248849 4727 scope.go:117] "RemoveContainer" containerID="4dd68691a09c53a0de5d643a632e3c4fb0995e184f32a3b868097b6d9e72e918" Dec 10 16:11:05 crc kubenswrapper[4727]: E1210 16:11:05.250255 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dd68691a09c53a0de5d643a632e3c4fb0995e184f32a3b868097b6d9e72e918\": container with ID starting with 4dd68691a09c53a0de5d643a632e3c4fb0995e184f32a3b868097b6d9e72e918 not found: ID does not exist" containerID="4dd68691a09c53a0de5d643a632e3c4fb0995e184f32a3b868097b6d9e72e918" Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.250328 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd68691a09c53a0de5d643a632e3c4fb0995e184f32a3b868097b6d9e72e918"} err="failed to get container status \"4dd68691a09c53a0de5d643a632e3c4fb0995e184f32a3b868097b6d9e72e918\": rpc error: code = NotFound desc = could not find container \"4dd68691a09c53a0de5d643a632e3c4fb0995e184f32a3b868097b6d9e72e918\": container with ID starting with 4dd68691a09c53a0de5d643a632e3c4fb0995e184f32a3b868097b6d9e72e918 not found: ID does not exist" Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.250363 4727 scope.go:117] "RemoveContainer" containerID="5b9485588938e1d8bbf0eed4918826e5787ec2bd2e1f669011cbc25eae33270d" Dec 10 16:11:05 crc kubenswrapper[4727]: E1210 16:11:05.250722 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b9485588938e1d8bbf0eed4918826e5787ec2bd2e1f669011cbc25eae33270d\": container with ID starting with 5b9485588938e1d8bbf0eed4918826e5787ec2bd2e1f669011cbc25eae33270d not found: ID does not exist" containerID="5b9485588938e1d8bbf0eed4918826e5787ec2bd2e1f669011cbc25eae33270d" Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.250758 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5b9485588938e1d8bbf0eed4918826e5787ec2bd2e1f669011cbc25eae33270d"} err="failed to get container status \"5b9485588938e1d8bbf0eed4918826e5787ec2bd2e1f669011cbc25eae33270d\": rpc error: code = NotFound desc = could not find container \"5b9485588938e1d8bbf0eed4918826e5787ec2bd2e1f669011cbc25eae33270d\": container with ID starting with 5b9485588938e1d8bbf0eed4918826e5787ec2bd2e1f669011cbc25eae33270d not found: ID does not exist" Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.250787 4727 scope.go:117] "RemoveContainer" containerID="1621da32aa2c9be2b376b7ac68f56578add368aaa01010431cc888d159c21642" Dec 10 16:11:05 crc kubenswrapper[4727]: E1210 16:11:05.251279 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1621da32aa2c9be2b376b7ac68f56578add368aaa01010431cc888d159c21642\": container with ID starting with 1621da32aa2c9be2b376b7ac68f56578add368aaa01010431cc888d159c21642 not found: ID does not exist" containerID="1621da32aa2c9be2b376b7ac68f56578add368aaa01010431cc888d159c21642" Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.251329 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1621da32aa2c9be2b376b7ac68f56578add368aaa01010431cc888d159c21642"} err="failed to get container status \"1621da32aa2c9be2b376b7ac68f56578add368aaa01010431cc888d159c21642\": rpc error: code = NotFound desc = could not find container \"1621da32aa2c9be2b376b7ac68f56578add368aaa01010431cc888d159c21642\": container with ID starting with 1621da32aa2c9be2b376b7ac68f56578add368aaa01010431cc888d159c21642 not found: ID does not exist" Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.352033 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cz5v\" (UniqueName: \"kubernetes.io/projected/7d202d4f-aba3-4302-8bef-e1653231a22a-kube-api-access-5cz5v\") pod \"7d202d4f-aba3-4302-8bef-e1653231a22a\" (UID: \"7d202d4f-aba3-4302-8bef-e1653231a22a\") " Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.352239 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d202d4f-aba3-4302-8bef-e1653231a22a-utilities\") pod \"7d202d4f-aba3-4302-8bef-e1653231a22a\" (UID: \"7d202d4f-aba3-4302-8bef-e1653231a22a\") " Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.352373 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d202d4f-aba3-4302-8bef-e1653231a22a-catalog-content\") pod \"7d202d4f-aba3-4302-8bef-e1653231a22a\" (UID: \"7d202d4f-aba3-4302-8bef-e1653231a22a\") " Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.353241 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d202d4f-aba3-4302-8bef-e1653231a22a-utilities" (OuterVolumeSpecName: "utilities") pod "7d202d4f-aba3-4302-8bef-e1653231a22a" (UID: "7d202d4f-aba3-4302-8bef-e1653231a22a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.372513 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d202d4f-aba3-4302-8bef-e1653231a22a-kube-api-access-5cz5v" (OuterVolumeSpecName: "kube-api-access-5cz5v") pod "7d202d4f-aba3-4302-8bef-e1653231a22a" (UID: "7d202d4f-aba3-4302-8bef-e1653231a22a"). InnerVolumeSpecName "kube-api-access-5cz5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.405315 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d202d4f-aba3-4302-8bef-e1653231a22a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d202d4f-aba3-4302-8bef-e1653231a22a" (UID: "7d202d4f-aba3-4302-8bef-e1653231a22a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.454658 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d202d4f-aba3-4302-8bef-e1653231a22a-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.454694 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d202d4f-aba3-4302-8bef-e1653231a22a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:05 crc kubenswrapper[4727]: I1210 16:11:05.454706 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cz5v\" (UniqueName: \"kubernetes.io/projected/7d202d4f-aba3-4302-8bef-e1653231a22a-kube-api-access-5cz5v\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:06 crc kubenswrapper[4727]: I1210 16:11:06.162887 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ctzwf" Dec 10 16:11:06 crc kubenswrapper[4727]: I1210 16:11:06.209502 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ctzwf"] Dec 10 16:11:06 crc kubenswrapper[4727]: I1210 16:11:06.223172 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ctzwf"] Dec 10 16:11:06 crc kubenswrapper[4727]: I1210 16:11:06.574453 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d202d4f-aba3-4302-8bef-e1653231a22a" path="/var/lib/kubelet/pods/7d202d4f-aba3-4302-8bef-e1653231a22a/volumes" Dec 10 16:11:08 crc kubenswrapper[4727]: E1210 16:11:08.566235 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:11:12 crc kubenswrapper[4727]: E1210 16:11:12.566722 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:11:21 crc kubenswrapper[4727]: E1210 16:11:21.566831 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:11:27 crc kubenswrapper[4727]: E1210 16:11:27.565948 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:11:32 crc kubenswrapper[4727]: E1210 16:11:32.565525 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:11:37 crc kubenswrapper[4727]: I1210 16:11:37.724177 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:11:37 crc kubenswrapper[4727]: I1210 16:11:37.724698 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:11:38 crc kubenswrapper[4727]: E1210 16:11:38.566265 4727 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:11:46 crc kubenswrapper[4727]: E1210 16:11:46.577878 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:11:52 crc kubenswrapper[4727]: E1210 16:11:52.566864 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:11:57 crc kubenswrapper[4727]: E1210 16:11:57.565264 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:12:03 crc kubenswrapper[4727]: E1210 16:12:03.565736 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:12:07 crc kubenswrapper[4727]: I1210 16:12:07.723993 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:12:07 crc kubenswrapper[4727]: I1210 16:12:07.724540 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:12:09 crc kubenswrapper[4727]: E1210 16:12:09.565780 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:12:17 crc kubenswrapper[4727]: E1210 16:12:17.565991 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 
10 16:12:22 crc kubenswrapper[4727]: E1210 16:12:22.565992 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:12:28 crc kubenswrapper[4727]: E1210 16:12:28.565942 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db" Dec 10 16:12:36 crc kubenswrapper[4727]: E1210 16:12:36.572726 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f" Dec 10 16:12:37 crc kubenswrapper[4727]: I1210 16:12:37.723924 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5kj8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:12:37 crc kubenswrapper[4727]: I1210 16:12:37.724282 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:12:37 crc kubenswrapper[4727]: I1210 16:12:37.724333 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" Dec 10 16:12:37 crc kubenswrapper[4727]: I1210 16:12:37.725346 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca0882b4a16abbfc402f48a0de802bc0d82fd99e6ea11ed0f352230d7ab1d463"} pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:12:37 crc kubenswrapper[4727]: I1210 16:12:37.725422 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerName="machine-config-daemon" containerID="cri-o://ca0882b4a16abbfc402f48a0de802bc0d82fd99e6ea11ed0f352230d7ab1d463" gracePeriod=600 Dec 10 16:12:38 crc kubenswrapper[4727]: E1210 16:12:38.378922 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" Dec 10 16:12:38 crc kubenswrapper[4727]: I1210 
16:12:38.607864 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe1deb75-3aeb-4657-a335-fd4c02a2a513" containerID="ca0882b4a16abbfc402f48a0de802bc0d82fd99e6ea11ed0f352230d7ab1d463" exitCode=0
Dec 10 16:12:38 crc kubenswrapper[4727]: I1210 16:12:38.607937 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" event={"ID":"fe1deb75-3aeb-4657-a335-fd4c02a2a513","Type":"ContainerDied","Data":"ca0882b4a16abbfc402f48a0de802bc0d82fd99e6ea11ed0f352230d7ab1d463"}
Dec 10 16:12:38 crc kubenswrapper[4727]: I1210 16:12:38.607999 4727 scope.go:117] "RemoveContainer" containerID="2ceecd4ab9f335ebf00db0ddc369f5513889fdc44b52e84f058ca732803b5712"
Dec 10 16:12:38 crc kubenswrapper[4727]: I1210 16:12:38.609045 4727 scope.go:117] "RemoveContainer" containerID="ca0882b4a16abbfc402f48a0de802bc0d82fd99e6ea11ed0f352230d7ab1d463"
Dec 10 16:12:38 crc kubenswrapper[4727]: E1210 16:12:38.609567 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 16:12:42 crc kubenswrapper[4727]: E1210 16:12:42.565219 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 16:12:51 crc kubenswrapper[4727]: I1210 16:12:51.563799 4727 scope.go:117] "RemoveContainer" containerID="ca0882b4a16abbfc402f48a0de802bc0d82fd99e6ea11ed0f352230d7ab1d463"
Dec 10 16:12:51 crc kubenswrapper[4727]: E1210 16:12:51.564597 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 16:12:51 crc kubenswrapper[4727]: E1210 16:12:51.566129 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 16:12:56 crc kubenswrapper[4727]: E1210 16:12:56.571883 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 16:13:04 crc kubenswrapper[4727]: I1210 16:13:04.563882 4727 scope.go:117] "RemoveContainer" containerID="ca0882b4a16abbfc402f48a0de802bc0d82fd99e6ea11ed0f352230d7ab1d463"
Dec 10 16:13:04 crc kubenswrapper[4727]: E1210 16:13:04.564686 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 16:13:06 crc kubenswrapper[4727]: E1210 16:13:06.573718 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 16:13:08 crc kubenswrapper[4727]: E1210 16:13:08.564717 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 16:13:18 crc kubenswrapper[4727]: I1210 16:13:18.563564 4727 scope.go:117] "RemoveContainer" containerID="ca0882b4a16abbfc402f48a0de802bc0d82fd99e6ea11ed0f352230d7ab1d463"
Dec 10 16:13:18 crc kubenswrapper[4727]: E1210 16:13:18.564632 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 16:13:19 crc kubenswrapper[4727]: E1210 16:13:19.566058 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 16:13:21 crc kubenswrapper[4727]: E1210 16:13:21.565502 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 16:13:30 crc kubenswrapper[4727]: I1210 16:13:30.563496 4727 scope.go:117] "RemoveContainer" containerID="ca0882b4a16abbfc402f48a0de802bc0d82fd99e6ea11ed0f352230d7ab1d463"
Dec 10 16:13:30 crc kubenswrapper[4727]: E1210 16:13:30.564351 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 16:13:30 crc kubenswrapper[4727]: E1210 16:13:30.568145 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 16:13:32 crc kubenswrapper[4727]: E1210 16:13:32.568273 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 16:13:41 crc kubenswrapper[4727]: I1210 16:13:41.564502 4727 scope.go:117] "RemoveContainer" containerID="ca0882b4a16abbfc402f48a0de802bc0d82fd99e6ea11ed0f352230d7ab1d463"
Dec 10 16:13:41 crc kubenswrapper[4727]: E1210 16:13:41.565572 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 16:13:41 crc kubenswrapper[4727]: E1210 16:13:41.566817 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 16:13:45 crc kubenswrapper[4727]: E1210 16:13:45.565116 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"
Dec 10 16:13:55 crc kubenswrapper[4727]: E1210 16:13:55.566182 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="727601cd-934c-4d0d-b32e-c66a80adbb9f"
Dec 10 16:13:56 crc kubenswrapper[4727]: I1210 16:13:56.572926 4727 scope.go:117] "RemoveContainer" containerID="ca0882b4a16abbfc402f48a0de802bc0d82fd99e6ea11ed0f352230d7ab1d463"
Dec 10 16:13:56 crc kubenswrapper[4727]: E1210 16:13:56.573501 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5kj8v_openshift-machine-config-operator(fe1deb75-3aeb-4657-a335-fd4c02a2a513)\"" pod="openshift-machine-config-operator/machine-config-daemon-5kj8v" podUID="fe1deb75-3aeb-4657-a335-fd4c02a2a513"
Dec 10 16:13:59 crc kubenswrapper[4727]: E1210 16:13:59.567464 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-dxhgk" podUID="64cfea48-c6f9-4698-a328-62937b40c2db"